They say you learn more from failure than success. If so, this is the story for you: MIT Technology Review’s annual roll call of the biggest flops, flimflams, and fiascos in all domains of technology.
Some of the foul-ups were funny, like the “woke” AI that got Google in trouble after it drew Black Nazis. Some caused lawsuits, like a computer error by CrowdStrike that left thousands of Delta passengers stranded. We also saw a crop of failures among startups that had raced to expand from 2020 to 2022, a period of ultra-low interest rates. But then the economic winds shifted. Money wasn’t free anymore. The result? Bankruptcy and dissolution for companies whose ambitious technological projects, from vertical farms to carbon credits, hadn’t yet turned a profit and might never do so.
Read on.
Woke AI blunder
People worry about bias creeping into AI. But what if you add bias on purpose? Thanks to Google, we know where that leads: Black Vikings and female popes.
Google’s Gemini AI image feature, launched last February, had been tuned to zealously showcase diversity, damn the history books. Ask Google for a picture of German soldiers from World War II, and it would create a Benetton ad in Wehrmacht uniforms.
Critics pounced and Google beat an embarrassed retreat. It paused Gemini’s ability to draw people and agreed its well-intentioned effort to be inclusive had “missed the mark.”
The free version of Gemini still won’t create images of people. But paid versions will. When we asked for an image of 12 CEOs of public biotech companies, the software produced a photographic-quality image of middle-aged white men. Less than ideal. But closer to the truth.
Boeing, we have a problem. And it’s your long-delayed reusable spaceship, the Starliner, which stranded NASA astronauts Sunita “Suni” Williams and Barry “Butch” Wilmore on the International Space Station.
The June mission was meant to be a quick eight-day round trip to test Starliner before it embarked on longer missions. But, plagued by helium leaks and thruster problems, it had to come back empty.
Now Butch and Suni won’t return to Earth until 2025, when a craft from Boeing competitor SpaceX is scheduled to bring them home.
Credit Boeing and NASA with putting safety first. But this wasn’t Boeing’s only malfunction during 2024. The company began the year with a door plug blowing off one of its planes midflight, faced a worker strike, agreed to a major fine for misleading the government about the safety of its 737 Max airplane (which made our 2019 list of worst technologies), and saw its CEO announce in March that he would step down.
After the Starliner fiasco, Boeing fired the chief of its space and defense unit. “At this critical juncture, our priority is to restore the trust of our customers and meet the high standards they expect of us to enable their critical missions around the world,” Boeing’s new CEO, Kelly Ortberg, said in a memo.
The motto of the cybersecurity company CrowdStrike is “We stop breaches.” And it’s true: No one can breach your computer if you can’t turn it on.
That’s exactly what happened to many people on July 19, when millions of Windows computers at airlines, TV stations, and hospitals started displaying the “blue screen of death.”
The cause wasn’t hackers or ransomware. Instead, those computers were stuck in a boot loop because of a bad update shipped by CrowdStrike itself. CEO George Kurtz jumped on X to say the “issue” had been identified as a “defect” in a single computer file.
So who is liable? CrowdStrike customer Delta Air Lines, which canceled 7,000 flights, is suing for $500 million. It alleges that the security firm caused a “global catastrophe” when it took “uncertified and untested shortcuts.”
CrowdStrike countersued. It says Delta’s management is to blame for its troubles and that the airline is due little more than a refund.
Grow lettuce in buildings using robots, hydroponics, and LED lights. That’s what Bowery, a “vertical farming” startup, raised over $700 million to do. But in November, Bowery went bust, making it the biggest startup failure of the year, according to the business analytics firm CB Insights.
Bowery claimed that vertical farms were “100 times more productive” per square foot than traditional farms, since racks of plants could be stacked 40 feet high. In reality, the company’s lettuce was more expensive, and when a stubborn plant infection spread through its East Coast facilities, Bowery had trouble delivering the green stuff at any price.
The deadly attack was diabolically clever. Israel set up shell companies that sold thousands of pagers packed with explosives to Hezbollah, which was already worried that its phones were being spied on.
A coup for Israel’s spies. But was it a war crime? A 1996 treaty prohibits intentionally manufacturing “apparently harmless objects” designed to explode. The New York Times says nine-year-old Fatima Abdullah died when her father’s booby-trapped beeper chimed and she raced to take it to him.
The company that pioneered direct-to-consumer gene testing is sinking fast. Its stock price is heading toward zero, and a plan to create valuable drugs is kaput after that team got pink slips this November.
23andMe always had a celebrity aura, bathing in good press. Now, though, the press is all bad. It’s a troubled company in the grip of a controlling founder, Anne Wojcicki, after its independent directors resigned en masse this September. Customers are starting to worry about what’s going to happen to their DNA data if 23andMe goes under.
23andMe says it created “the world’s largest crowdsourced platform for genetic research.” That’s true. It just never figured out how to turn a profit.
Slop is the scraps and leftovers that pigs eat. “AI slop” is what you and I are increasingly consuming online now that people are flooding the internet with computer-generated text and pictures.
AI slop is “dubious,” says the New York Times, and “dadaist,” according to Wired. It’s frequently weird, like Shrimp Jesus (don’t ask if you don’t know), or deceptive, like the picture of a shivering girl in a rowboat, supposedly showing the US government’s poor response to Hurricane Helene.
AI slop is often entertaining. AI slop is usually a waste of your time. AI slop is not fact-checked. AI slop exists mostly to get clicks. AI slop is that blue-check account on X posting 10-part threads on how great AI is—threads that were written by AI.
Most of all, AI slop is very, very common. This year, researchers claimed that about half the long posts on LinkedIn and Medium were partly AI-generated.
Your business creates emissions that contribute to global warming. So why not pay to have some trees planted or buy a more efficient cookstove for someone in Central America? Then you could reach net-zero emissions and help save the planet.
Neat idea, but good intentions aren’t enough. This year the carbon marketplace Nori shut down, and so did Running Tide, a firm trying to sink carbon into the ocean. “The problem is the voluntary carbon market is voluntary,” Running Tide’s CEO wrote in a farewell post, citing a lack of demand.
While companies like to blame low demand, it’s not the only issue. Sketchy technology, questionable credits, and make-believe offsets have created a credibility problem in carbon markets. In October, US prosecutors charged two men in a $100 million scheme involving the sale of nonexistent emissions savings.
There’s no denying that the AI industry moves fast. Each week brings a bold new announcement, product release, or lofty claim that pushes the bounds of what we previously thought was possible. Separating AI fact from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.
Our first index is a white-knuckle ride that ranges from the outright depressing—rising numbers of sexually explicit deepfakes; the complete lack of rules governing Elon Musk’s Grok AI model—to the bizarre, including AI-powered dating wingmen and startup Friend’s dorky intelligent-jewelry line.
But it’s not all a horror show—at least not entirely. AI is being used for more wholesome endeavors, too, like simulating the classic video game Doom without a traditional gaming engine. Elsewhere, AI models have gotten so good at table tennis they can now beat beginner-level human opponents. They’re also giving us essential insight into the secret names monkeys use to communicate with one another. Because while AI may be a lot of things, it’s never boring.
In an essay last week, Sam Altman, the CEO of OpenAI, argued that the accelerating capabilities of AI will usher in an idyllic “Intelligence Age,” unleashing “unimaginable” prosperity and “astounding triumphs” like “fixing the climate.”
It’s a promise that no one is in a position to make—and one that, when it comes to the topic of climate change, fundamentally misunderstands the nature of the problem.
More maddening, the argument suggests that the technology’s massive consumption of electricity today doesn’t much matter, since it will allow us to generate abundant clean power in the future. That casually waves away growing concerns about a technology that’s already accelerating proposals for natural-gas plants and diverting major tech companies from their corporate climate targets.
By all accounts, AI’s energy demands will only continue to increase, even as the world scrambles to build larger, cleaner power systems to meet the increasing needs of EV charging, green hydrogen production, heat pumps, and other low-carbon technologies. Altman himself reportedly just met with White House officials to make the case for building absolutely massive AI data centers, which could require the equivalent of five dedicated nuclear reactors to run.
It’s a bedrock perspective of MIT Technology Review that technological advances can deliver real benefits and accelerate societal progress in meaningful ways. But for decades researchers and companies have oversold the potential of AI to deliver blockbuster medicines, achieve superintelligence, and free humanity from the need to work. To be fair, there have been significant advances, but nothing on the order of what’s been hyped.
Given that track record, I’d argue you need to develop a tool that does more than plagiarize journalism and help students cheat on homework before you can credibly assert that it will solve humanity’s thorniest problems, whether the target is rampant poverty or global warming.
All those advances are still relatively incremental. But let’s say AI does bring about an energy miracle. Perhaps its pattern-recognition prowess will deliver the key insight that finally cracks fusion—a technology that Altman is betting on heavily as an investor.
That would be fantastic. But technological advances are just the start—necessary but far from sufficient to eliminate the world’s climate emissions.
How do I know?
Because between nuclear fission plants, solar farms, wind turbines, and batteries, we already have every technology we need to clean up the power sector. This should be the low-hanging fruit of the energy transition. Yet in the largest economy on Earth, fossil fuels still generate 60% of the electricity. The fact that so much of our power still comes from coal, petroleum, and natural gas is a regulatory failure as much as a technological one.
“As long as we effectively subsidize fossil fuels by allowing them to use the atmosphere as a waste dump, we are not allowing clean energy to compete on a level playing field,” Zeke Hausfather, a climate scientist at the independent research organization Berkeley Earth, wrote on X in a response to Altman’s post. “We need policy changes, not just tech breakthroughs, to meet our climate goals.”
That’s not to say there aren’t big technical problems we still need to solve. Just look at the continuing struggles to develop clean, cost-competitive ways of fertilizing crops or flying planes. But the fundamental challenges of climate change are sunk costs, development obstacles, and inertia.
We’ve built and paid for a global economy that spews out planet-warming gases, investing trillions of dollars in power plants, steel mills, factories, jets, boilers, water heaters, stoves, and SUVs that run on fossil fuels. And few people or companies will happily write off those investments so long as those products and plants still work. AI can’t remedy all that just by generating better ideas.
To raze and replace the machinery of every industry around the world at the speed now required, we will need increasingly aggressive climate policies that incentivize or force everyone to switch to cleaner plants, products, and practices.
But with every proposal for a stricter law or some big new wind or solar farm, forces will push back, because the plan will hit someone’s wallet, block someone’s views, or threaten the areas or traditions someone cherishes. Climate change is an infrastructure problem, and building infrastructure is a messy human endeavor.
Tech advances can ease some of these issues. Cheaper, better alternatives to legacy industries make hard choices more politically palatable. But there are no improvements to AI algorithms or underlying data sets that solve the challenge of NIMBYism, the conflict between human interests, or the desire to breathe the fresh air in an unsullied wilderness.
To assert that a single technology—that just happens to be the one your company develops—can miraculously untangle these intractable conflicts of human society is at best self-serving, if not a little naïve. And it’s a troubling idea to proclaim at a point when the growth of that very technology is threatening to undermine the meager progress the world has begun to make on climate change.
As it is, the one thing we can state confidently about generative AI is that it’s making the hardest problem we’ve ever had to solve that much harder to solve.
More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.
They’re pretty varied. One aims to improve knowledge of AI in public schools, while another is pushing for model developers to disclose what copyrighted material they use in training. Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.
The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.
Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.
That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord.
Taken together, the bills can make it seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at which bills are moving along to potentially become law.
A bill typically needs to pass a committee, or a smaller body of Congress, before it is voted on by the whole Congress. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean.
Congress has passed legislation on artificial intelligence before. Back in 2020, the National AI Initiative Act was part of the Defense Authorization Act, which invested resources in AI research and provided support for public education and workforce training on AI.
And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. One authorizes the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models. The others focus on expanding education on AI, establishing public computing resources for AI research, and criminalizing the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.
“The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.
The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”
Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.
“It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.
Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation.
For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems.
“Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.
“It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.
Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.”
And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”
But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”
“A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”
“We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.
There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.
The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).
The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.
On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.
The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”
After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.
One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent.
“I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.”
None of the current slate of bills has become law yet, and regulating any new technology—particularly advanced AI systems that no one entirely understands—is difficult. The fact that Congress is making any progress at all may be surprising in itself.
“Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”
Update: We added further comments from the Republican spokesperson.
As a lifelong disabled person who constantly copes with multiple conditions, I have a natural tendency to view emerging technologies with skepticism. Most new things are built for the majority of people—in this case, people without disabilities—and the truth of the matter is there’s no guarantee I’ll have access to them.
There are certainly exceptions to the rule. A prime example is the iPhone. Although discrete accessibility software did not appear until the device’s third-generation model, in 2009, earlier generations were still revolutionary for me. After I’d spent years using flip phones with postage-stamp-size screens and hard-to-press buttons, the fact that the original iPhone had a relatively large screen and a touch-based UI was accessibility unto itself.
AI could make these kinds of jumps in accessibility more common across a wide range of technologies. But you probably haven’t heard much about that possibility. While the New York Times sues OpenAI over ChatGPT’s scraping of its content and everyone ruminates over the ethics of AI tools, there seems to be less consideration of the good ChatGPT can do for people of various abilities. For someone with visual and motor delays, using ChatGPT to do research can be a lifesaver. Instead of trying to manage a dozen browser tabs with Google searches and other pertinent information, you can have ChatGPT collate everything into one space. Likewise, it’s highly plausible that artists who can’t draw in the conventional manner could use voice prompts to have Midjourney or Adobe Firefly create what they’re thinking of. That might be the only way for such a person to indulge an artistic passion.
Of course, data needs to be vetted for accuracy and gathered with permission—there are ample reasons to be wary of AI’s potential to serve up wrong or potentially harmful, ableist information about the disabled community. Still, it feels unappreciated (and underreported) that AI-based software can truly be an assistive technology, enabling people to do things they otherwise would be excluded from. AI could give a disabled person agency and autonomy. That’s the whole point of accessibility—freeing people in a society not designed for their needs.
The ability to automatically generate video captions and image descriptions provides additional examples of how automation can make computers and productivity technology more accessible. And more broadly, it’s hard not to be enthused about ever-burgeoning technologies like autonomous vehicles. Most tech journalists and other industry watchers are interested in self-driving cars for the sheer novelty, but the reality is the AI software behind vehicles like Waymo’s fleet of Jaguar SUVs is quite literally enabling many in the disability community to exert more agency over their transport. For those who, like me, are blind or have low vision, the ability to summon a ride on demand and go anywhere without imposing on anyone else for help is a huge deal. It’s not hard to envision a future in which, as the technology matures, autonomous vehicles are normalized to the point where blind people could buy their own cars.
At the same time, AI is enabling serious advances in technology for people with limb differences. How exciting will it be, decades from now, to have synthetic arms and legs, hands or feet, that more or less function like the real things? Similarly, the team at Boston-based Tatum Robotics is combining hardware with AI to make communication more accessible for deaf-blind people: A robotic hand forms hand signs, or words in American Sign Language that can be read tactilely against the palm. Like autonomous vehicles, these applications have enormous potential to positively influence the everyday lives of countless people. All this goes far beyond mere chatbots.
It should be noted that disabled people historically have been among the earliest adopters of new technologies. AI is no different, yet public discourse routinely fails to meaningfully account for this. After all, AI plays to a computer’s greatest strength: automation. As time marches on, the way AI grows and evolves will be unmistakably and indelibly shaped by disabled people and our myriad needs and tolerances. It will offer us more access to information, to productivity, and most important, to society writ large.
Steven Aquino is a freelance tech journalist covering accessibility and assistive technologies. He is based in San Francisco.
The US Department of Homeland Security (DHS) plans to collect and analyze photos of the faces of migrant children at the border in a bid to improve facial recognition technology, MIT Technology Review can reveal. This includes children “down to the infant,” according to John Boyd, assistant director of the department’s Office of Biometric Identity Management (OBIM), where a key part of his role is to research and develop future biometric identity services for the government.
As Boyd explained at a conference in June, the key question for OBIM is, “If we pick up someone from Panama at the southern border at age four, say, and then pick them up at age six, are we going to recognize them?”
Facial recognition technology (FRT) has traditionally not been applied to children, largely because training data sets of real children’s faces are few and far between, and consist of either low-quality images drawn from the internet or small sample sizes with little diversity. Such limitations reflect the significant sensitivities regarding privacy and consent when it comes to minors.
In practice, the new DHS plan could effectively solve that problem. According to Syracuse University’s Transactional Records Access Clearinghouse (TRAC), 339,234 children arrived at the US-Mexico border in 2022, the last year for which numbers are currently available. Of those children, 150,000 were unaccompanied—the highest annual number on record. If the face prints of even 1% of those children—roughly 3,400 kids—had been enrolled in OBIM’s craniofacial structural progression program, the resulting data set would dwarf nearly all existing data sets of real children’s faces used for aging research.
It’s unclear to what extent the plan has already been implemented; Boyd tells MIT Technology Review that to the best of his knowledge, the agency has not yet started collecting data under the program, but he adds that as “the senior executive,” he would “have to get with [his] staff to see.” He could only confirm that his office is “funding” it. Despite repeated requests, Boyd did not provide any additional information.
Boyd says OBIM’s plan to collect facial images from children under 14 is possible due to recent “rulemaking” at “some DHS components,” or sub-offices, that have removed age restrictions on the collection of biometric data. US Customs and Border Protection (CBP), the US Transportation Security Administration, and US Immigration and Customs Enforcement declined to comment before publication. US Citizenship and Immigration Services (USCIS) did not respond to multiple requests for comment. OBIM referred MIT Technology Review back to DHS’s main press office.
DHS did not comment on the program prior to publication, but sent an emailed statement afterward: “The Department of Homeland Security uses various forms of technology to execute its mission, including some biometric capabilities. DHS ensures all technologies, regardless of type, are operated under the established authorities and within the scope of the law. We are committed to protecting the privacy, civil rights, and civil liberties of all individuals who may be subject to the technology we use to keep the nation safe and secure.”
Boyd spoke publicly about the plan in June at the Federal Identity Forum and Exposition, an annual identity management conference for federal employees and contractors. But close observers of DHS that we spoke with—including a former official, representatives of two influential lawmakers who have spoken out about the federal government’s use of surveillance technologies, and immigrants’ rights organizations that closely track policies affecting migrants—were unaware of any new policies allowing biometric data collection of children under 14.
That is not to say that all of them are surprised. “That tracks,” says one former CBP official who has visited several migrant processing centers on the US-Mexico border and requested anonymity to speak freely. He says “every center” he visited “had biometric identity collection, and everybody was going through it,” though he was unaware of a specific policy mandating the practice. “I don’t recall them separating out children,” he adds.
“The reports of CBP, as well as DHS more broadly, expanding the use of facial recognition technology to track migrant children is another stride toward a surveillance state and should be a concern to everyone who values privacy,” Justin Krakoff, deputy communications director for Senator Jeff Merkley of Oregon, said in a statement to MIT Technology Review. Merkley has been an outspoken critic of both DHS’s immigration policies and of government use of facial recognition technologies.
Beyond concerns about privacy, transparency, and accountability, some experts also worry about testing and developing new technologies using data from a population that has little recourse to provide—or withhold—consent.
Could consent “actually take into account the vast power differentials that are inherent in the way that this is tested out on people?” asks Petra Molnar, author of The Walls Have Eyes: Surviving Migration in the Age of AI. “And if you arrive at a border … and you are faced with the impossible choice of either: get into a country if you give us your biometrics, or you don’t.”
“That completely vitiates informed consent,” she adds.
This question becomes even more challenging when it comes to children, says Ashley Gorski, a senior staff attorney with the American Civil Liberties Union. DHS “should have to meet an extremely high bar to show that these kids and their legal guardians have meaningfully consented to serve as test subjects,” she says. “There’s a significant intimidation factor, and children aren’t as equipped to consider long-term risks.”
Murky new rules
The Office of Biometric Identity Management, previously known as the US Visitor and Immigrant Status Indicator Technology Program (US-VISIT), was created after 9/11 with the specific mandate of collecting biometric data—initially only fingerprints and photographs—from all non-US citizens who sought to enter the country.
Since then, DHS has begun collecting face prints, iris and retina scans, and even DNA, among other modalities. It is also testing new ways of gathering this data—including through contactless fingerprint collection, which is currently deployed at five sites on the border, as Boyd shared in his conference presentation.
Since 2023, CBP has been using a mobile app, CBP One, for asylum seekers to submit biometric data even before they enter the United States; users are required to take selfies periodically to verify their identity. The app has been riddled with problems, including technical glitches and facial recognition algorithms that are unable to recognize darker-skinned people. This is compounded by the fact that not every asylum seeker has a smartphone.
Then, just after crossing into the United States, migrants must submit to collection of biometric data, including DNA. For a sense of scale, a recent report from Georgetown Law School’s Center on Privacy and Technology found that CBP has added 1.5 million DNA profiles, primarily from migrants crossing the border, to law enforcement databases since it began collecting DNA “from any person in CBP custody subject to fingerprinting” in January 2020. The researchers noted that an overrepresentation of immigrants—the majority of whom are people of color—in a DNA database used by law enforcement could subject them to over-policing and lead to other forms of bias.
Generally, these programs only require information from individuals aged 14 to 79. DHS attempted to change this back in 2020, with proposed rules for USCIS and CBP that would have expanded biometric data collection dramatically, including by age. (USCIS’s proposed rule would have doubled the number of people from whom biometric data would be required, including any US citizen who sponsors an immigrant.) But the USCIS rule was withdrawn in the wake of the Biden administration’s new “priorities to reduce barriers and undue burdens in the immigration system.” Meanwhile, for reasons that remain unclear, the proposed CBP rule was never enacted.
This would make it appear “contradictory” if DHS were now collecting the biometric data of children under 14, says Dinesh McCoy, a staff attorney with Just Futures Law, an immigrant rights group that tracks surveillance technologies.
Neither Boyd nor DHS’s media office would confirm which specific policy changes he was referring to in his presentation, though MIT Technology Review has identified a 2017 memo, issued by then-Secretary of Homeland Security John F. Kelly, that encouraged DHS components to remove “age as a basis for determining when to collect biometrics.”
The DHS’s Office of the Inspector General (OIG) referred to this memo as the “overarching policy for biometrics at DHS” in a September 2023 report, though none of the press offices MIT Technology Review contacted—including the main DHS press office, OIG, and OBIM, among others—would confirm whether this was still the relevant policy; we have not been able to confirm any related policy changes since then.
The OIG audit also found a number of fundamental issues related to DHS’s oversight of biometric data collection and use—including that its 10-year strategic framework for biometrics, covering 2015 to 2025, “did not accurately reflect the current state of biometrics across the Department, such as the use of facial recognition verification and identification.” Nor did it provide clear guidance for the consistent collection and use of biometrics across DHS, including age requirements.
But there is also another potential explanation for the new OBIM program: Boyd says it is being conducted under the auspices of the DHS’s undersecretary of science and technology, the office that leads much of the agency’s research efforts. Because it is for research, rather than to be used “in DHS operations to inform processes or decision making,” many of the standard restrictions for DHS use of face recognition and face capture technologies do not apply, according to a DHS directive.
Some lawyers allege that changing the age limit for data collection via department policy, not by a federal rule, which requires a public comment period, is problematic. McCoy, for instance, says any lack of transparency here amplifies the already “extremely challenging” task of “finding [out] in a systematic way how these technologies are deployed”—even though that is key for accountability.
Advancing the field
At the identity forum and in a subsequent conversation, Boyd explained that this data collection is meant to advance the development of effective FRT algorithms. Boyd leads OBIM’s Future Identity team, whose mission is to “research, review, assess, and develop technology, policy, and human factors that enable rapid, accurate, and secure identity services” and to make OBIM “the preferred provider for identity services within DHS.”
Driven by high-profile cases of missing children, there has long been interest in understanding how children’s faces age. At the same time, there have been technical challenges to doing so, both preceding FRT and with it.
At its core, facial recognition identifies individuals by comparing the geometry of various facial features in an original face print with subsequent images. Based on this comparison, a facial recognition algorithm assigns a percentage likelihood that there is a match.
But as children grow and develop, their bone structure changes significantly, making it difficult for facial recognition algorithms to identify them over time. (These changes tend to be even more pronounced in children under 14. In contrast, as adults age, the changes tend to be in the skin and muscle, and have less variation overall.) More data would help solve this problem, but there is a dearth of high-quality data sets of children’s faces with verifiable ages.
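To make that comparison concrete, here is a minimal, illustrative sketch of how a modern face recognition system might score a candidate match. Such systems typically encode the geometry of a face implicitly as a numeric feature vector (an “embedding”) and compare the enrolled face print with a new image using a similarity score and a decision threshold. The 128-dimensional vectors, the threshold, and the percentage conversion below are assumptions for illustration only, not a description of OBIM’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face embeddings, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_score(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.6):
    # Map similarity to a 0-100% score (an assumed reporting convention)
    # and compare against an assumed decision threshold.
    sim = cosine_similarity(enrolled, probe)
    return (sim + 1) / 2 * 100, sim >= threshold

rng = np.random.default_rng(0)
# Toy stand-ins for the feature vectors a real face-recognition model would produce.
enrolled_print = rng.normal(size=128)                                # face print at enrollment
same_face_later = enrolled_print + rng.normal(scale=0.3, size=128)   # same face, years later

score, is_match = match_score(enrolled_print, same_face_later)
print(f"match score: {score:.1f}% -> {'match' if is_match else 'no match'}")
```

The aging problem researchers describe shows up in this framing as drift: the more a child’s features change between the enrolled print and the new image, the further apart the two embeddings move, until the score falls below the matching threshold.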
“What we’re trying to do is to get large data sets of known individuals,” Boyd tells MIT Technology Review. That means taking high-quality face prints “under controlled conditions where we know we’ve got the person with the right name [and] the correct birth date”—or, in other words, where they can be certain about the “provenance of the data.”
For example, one data set used for aging research consists of 305 celebrities’ faces as they aged from five to 32. But these photos, scraped from the internet, contain too many other variables—such as differing image qualities, lighting conditions, and distances at which they were taken—to be truly useful. Plus, speaking to the provenance issue that Boyd highlights, their actual ages in each photo can only be estimated.
Another tactic is to use data sets of adult faces that have been synthetically de-aged. Synthetic data is considered more privacy-preserving, but it too has limitations, says Stephanie Schuckers, director of the Center for Identification Technology Research (CITeR). “You can test things with only the generated data,” Schuckers explains, but the question remains: “Would you get similar results to the real data?”
(Hosted at Clarkson University in New York, CITeR brings together a network of academic and government affiliates working on identity technologies. OBIM is a member of the research consortium.)
Schuckers’s team at CITeR has taken another approach: an ongoing longitudinal study of a cohort of 231 elementary and middle school students from the area around Clarkson University. Since 2016, the team has captured biometric data every six months (save for two years of the covid-19 pandemic), including facial images. They have found that the open-source face recognition models they tested can in fact successfully recognize children three to four years after they were initially enrolled.
But the conditions of this study aren’t easily replicable at scale. The study images are taken in a controlled environment, all the participants are volunteers, the researchers sought consent from parents and the subjects themselves, and the research was approved by the university’s Institutional Review Board. Schuckers’s research also promises to protect privacy by requiring other researchers to request access, and by providing facial datasets separately from other data that have been collected.
What’s more, this research still has technical limitations, including that the sample is small, and it is overwhelmingly Caucasian, meaning it might be less accurate when applied to other races.
Schuckers says she was unaware of DHS’s craniofacial structural progression initiative.
Far-reaching implications
Boyd says OBIM takes privacy considerations seriously, and that “we don’t share … data with commercial industries.” Still, OBIM has 144 government partners with which it does share information, and it has been criticized by the Government Accountability Office for poorly documenting who it shares information with, and with what privacy-protecting measures.
Even if the data does stay within the federal government, OBIM’s findings regarding the accuracy of FRT for children over time could nevertheless influence how—and when—the rest of the government collects biometric data, as well as whether the broader facial recognition industry may also market its services for children. (Indeed, Boyd says sharing “results,” or the findings of how accurate FRT algorithms are, is different than sharing the data itself.)
That this technology is being tested on people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies.
In fact, Boyd previously helped advance the Department of Defense’s biometric systems in Iraq and Afghanistan, where he acknowledged that individuals lacked the privacy protections that would have been granted in many other contexts, despite the incredibly high stakes. Biometric data collected in those war zones—in some areas, from every fighting-age male—was used to identify and target insurgents, and being misidentified could mean death.
These projects subsequently played a substantial role in influencing the expansion of biometric data collection by the Department of Defense, which now happens globally. And architects of the program, like Boyd, have taken important roles in expanding the use of biometrics at other agencies.
“It’s not an accident” that this testing happens in the context of border zones, says Molnar. Borders are “the perfect laboratory for tech experimentation, because oversight is weak, discretion is baked into the decisions that get made … it allows the state to experiment in ways that it wouldn’t be allowed to in other spaces.”
But, she notes, “just because it happens at the border doesn’t mean that that’s where it’s going to stay.”
Update: This story was updated to include comment from DHS.
Do you have any additional information on DHS’s craniofacial structural progression initiative? Please reach out with a non-work email to tips@technologyreview.com or securely on Signal at 626.765.5489.
Forgive us for using a technical term, but it’s absolutely bananas how far AI has come in the past year. And while some people are worried that the robots are here to take our jobs, we’re convinced they’re here to help us do our jobs better—especially when it comes to AI marketing tools.
If you are a small team or solopreneur wanting to scale your reach and analytics capability, AI marketing tools can be a game-changer. With the right software, companies with minimal resources can compete with the big boys.
AI marketing software can help draft content, generate SEO-friendly social captions, repurpose a blog post into a video, suggest an ad strategy, and so much more. Seriously, just imagine what your marketing team could accomplish with a little (digital) helping hand.
Ready to step up your social media marketing with a little help from artificial intelligence? Read on for our master list of the best AI marketing tools for 2024.
25 best AI marketing tools for 2024
A quick caveat before we dive into our list of the best AI marketing tools for 2024: the “best” software for you is going to be totally dependent on your social media goals.
So before you lean into the world of automation and adopt every tool here, take a minute to think about what parts of your current process could use scaling or streamlining.
Managing social media = 25 jobs. Creating content, posting to multiple networks, learning about your audience, providing customer support, reporting wins to your boss… It’s a lot.
Hootsuite helps with all of it, offering integrations with your other tools, including Canva, HubSpot, Shopify, Mailchimp, Microsoft Dynamics, and 200+ more.
But hey, this is a story about AI marketing tools… and Hootsuite absolutely delivers there, too.
OwlyWriter is built on ChatGPT’s pioneering language model, but it also includes all of our winning content formulas that took over 14 years of research to develop.
Craft perfect posts in seconds
OwlyWriter AI instantly generates captions and content ideas for every social media network. It’s seriously easy.
If you’re fine with writing your own captions but need a little help generating relevant hashtags, Hootsuite’s AI hashtag suggestion tool is a game-changer.
We all know hashtags are kind of a secret weapon when it comes to expanding your reach, but coming up with the right hashtags on your own can be…tricky.
Hootsuite’s AI suggestion tool is built right into the Compose window, so it’s super easy to generate smart hashtags while you write and schedule your content. Our AI technology analyzes both your caption and the images you’ve uploaded to suggest the most relevant hashtags.
And if you’re handling customer service via social media, Hootsuite Inbox also features some powerful AI capabilities. With Hootsuite Inbox, you can bridge the gap between social media engagement and customer service — and manage all of your social media messages in one place.
Beyond collecting all your DMs into a handy package, Inbox also features:
Price: Starts at $99 for professional plans
Who it’s best for: Social media managers, digital marketing professionals, small to large businesses, content creators
Reduce response time (and your workload)
Manage all your messages stress-free with easy routing, saved replies, and friendly chatbots. Try Hootsuite’s Inbox today.
Caption generator. Curious about OwlyWriter AI or other AI writing tools? Try out caption generation for free. This generator isn’t as advanced as OwlyWriter. Still, unlike ChatGPT, it’s optimized for social media and custom-creates your caption for your chosen platform in one of five languages. Nice.
Username idea generator. Identity crisis? No problem! With Hootsuite’s username generator, you can reinvent yourself in a few seconds flat.
About/bio writer. We get it—it can feel awkward to write about yourself. Let Hootsuite sum up the special sauce of your life automatically.
AI hashtag generator. Not sure how to tag your latest Insta post for maximum exposure? Our hashtag tool is here to help. (And if you want suggestions based on an image or video? Sign up for Hootsuite for the full multimedia experience!)
Price: Free!
Who it’s best for: Content creators, small teams, businesses just beginning their social media journeys
Does ChatGPT even need an introduction? It’s become the default name for “AI content creation.”
A natural language processing (NLP) chatbot, ChatGPT can comprehend and generate material that sounds like a real person wrote it, including blog posts, social media postings and more.
Price: Free! The ChatGPT Plus subscription costs $20 a month and offers faster response times and early access to new features
Who it’s best for: Developers, businesses with AI integration needs, content creators, customer support teams
Dall-E is the visual sibling of ChatGPT, an AI-powered solution for graphic design. Provide a text prompt detailing your visual concept, and Dall-E brings it to life.
Dall-E is great for crafting original graphics for blog posts or social media, prototyping designs, or developing web graphics.
It’s not the best for creating brand elements like logos or packaging designs, though. Brand identity, after all, calls for more than just aesthetics.
Price: Dall-E charges by the image, with pricing ranging from $0.016 to $0.040 per image depending on the scale and resolution
Who it’s best for: Graphic designers, artists, creatives, marketing professionals
Copy.ai streamlines the copywriting process by generating almost-ready-to-publish drafts requiring minimal human editing.
Just provide a topic and creative direction; you’ll get outlines, articles, social posts, and sales emails in a few seconds. Try this AI marketing tool out if you need to generate written content rapidly at scale.
Price: The basic plan is free, with pro plans starting at $36 a month
Who it’s best for: Content writers, copywriters, marketers, small business owners
A major ChatGPT competitor, JasperAI’s specialty is its ability to account for tone. That makes it a great tool for generating on-brand articles, social media posts, and scripts that consistently reflect your voice.
JasperAI also facilitates content translation into 30 languages and integrates with tons of different applications.
Try using JasperAI to generate content ideas and outlines or translate content for global audiences. (That being said, leave the big stories to human professionals.)
Price: From $39 a month
Who it’s best for: Individual content creators, collaborative marketing teams
Creating a brand from scratch? Work with a graphic design professional. Looking to repurpose existing content and maintain visual consistency using a library of graphic templates? Canva is a great tool to keep in your toolbox.
Canva isn’t just for social graphics; it covers document design, presentations, and more.
Plus, with the introduction of Magic Design, Canva utilizes AI-powered content creation to generate matching templates based on uploaded media, making it efficient to repurpose content across various platforms.
Price: A basic account is free, but to access premium content and new tools (like Magic Design), prices start at $18.99 a month
Who it’s best for: Designers, small business owners, social media managers, students
Similar to Dall-E, Midjourney creates graphics using AI prompts. It operates as a Discord bot, which means it’s very user-friendly, and you can enable art creation from anywhere—even your phone.
While anyone can learn to create prompts in Midjourney, getting the results you want takes a bit of practice—check out our AI art prompting guide.
Advanced Midjourney options include specifying features like a transparent background, aspect ratio, and art style.
Price: From $10 a month
Who it’s best for: Advanced AI image users, graphic designers
Synthesia enhances video production, even if you’re the most camera-shy marketing person on the planet. It uses AI avatars to quickly transform video scripts into finished videos.
The platform offers industry-specific templates that facilitate quick script creation. Plus, users can review and edit the video before publishing, all within the software.
Use it to create quick how-to tutorials at scale or generate clips for Reels and TikTok.
Price: Starting from $22 a month
Who it’s best for: Video editors, marketing professionals
Murf specializes in generating real, human-sounding voices for reading scripts. The platform’s AI voices are created from real people, allowing you to produce studio-quality audio in 20 languages.
You can even clone your own voice for authenticity and time savings. It’s a great tool for recording voiceovers for social videos, podcasts, or brand presentations (though, as with many AI tools, it’s not the best for long-term branding).
Price: The entry level plan is free, with more advanced tools available starting from $19 a month
Who it’s best for: Teams interested in multimedia production
Looking into video or audio podcasts? Podcastle might be for you. This AI-powered podcast recording and editing app allows you to record video and audio streams, including virtual interviews with up to 10 participants.
Podcastle’s AI-powered editing automates tasks like cutting out silence, suggesting clip trims, and minimizing noise while maintaining consistent volume levels.
In other words, this is a great option if you’re starting a new podcast and need easy audio and video editing tools (and a fancy voice-cloning feature). If you’re more experienced in the podcast world, though, this might be too basic for your needs.
Price: Basic plans are free, with more advanced functionality available from $12.99 a month
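To see what automated silence-trimming involves under the hood, here’s a toy version using the open-source pydub library (which needs ffmpeg installed). The file names and thresholds are made up for illustration.

```python
# Toy silence removal with pydub, illustrating the kind of cleanup
# Podcastle automates. File names and thresholds are placeholders.
from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_file("raw_episode.wav")
chunks = split_on_silence(
    audio,
    min_silence_len=700,             # pauses longer than 0.7 s get cut
    silence_thresh=audio.dBFS - 16,  # anything well below average volume
)

edited = AudioSegment.empty()
for chunk in chunks:
    edited += chunk + AudioSegment.silent(duration=150)  # keep a natural beat
edited.export("edited_episode.wav", format="wav")
```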
QuillBot differentiates itself by helping you rephrase existing content creatively. It goes beyond basic synonym changes—QuillBot can simplify or expand content for brevity or detail.
It also offers unique extensions, including an AI-powered web research tool, a citation generator, and a “co-writing” sentence completer.
Use QuillBot to repurpose content or generate multiple versions of the same story.
Price: Some functions available for free, but to access all the tools, plans start at $8.33 a month
Who it’s best for: Social media managers looking to repurpose existing content
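QuillBot’s own models are proprietary, but you can get a feel for multi-version paraphrasing with an open-source model from Hugging Face. The model named below is one community paraphraser, not anything QuillBot uses.

```python
# Toy paraphrasing with an open-source Pegasus model. This illustrates
# the generate-several-rewrites idea, not QuillBot's actual system.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

MODEL = "tuner007/pegasus_paraphrase"  # community model, an assumption
tokenizer = PegasusTokenizer.from_pretrained(MODEL)
model = PegasusForConditionalGeneration.from_pretrained(MODEL)

def paraphrase(text: str, n: int = 3) -> list[str]:
    """Return n candidate rewrites of one sentence."""
    batch = tokenizer([text], truncation=True, padding="longest", return_tensors="pt")
    outputs = model.generate(**batch, max_length=60, num_beams=5, num_return_sequences=n)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(paraphrase("Our new feature helps teams ship content twice as fast."))
```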
Wanna level up your product pics? Magic Studio is your new go-to. This AI tool caters to brands aiming to elevate their visual presence.
Magic Studio helps instantly place your best-sellers on sleek backgrounds, remove unwanted objects, and generate photos based on text prompts. It’s also a great tool for creating all-star profile pics.
Price: Some functions are free, but pro plans start from $7.49 a month
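Magic Studio’s internals aren’t public, so this sketch uses the open-source rembg library to show the core trick: cutting a product out of its background so it can sit on a cleaner backdrop. File names are placeholders.

```python
# Toy background removal with the open-source rembg library, standing in
# for Magic Studio's product-photo feature. File names are placeholders.
from rembg import remove

with open("product_photo.jpg", "rb") as f:
    cutout = remove(f.read())  # returns PNG bytes with a transparent background

with open("product_cutout.png", "wb") as f:
    f.write(cutout)
```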
DeepBrain AI Studios offers a user-friendly platform for AI video creation, converting text to video seamlessly. (It’s honestly a little spooky.)
Thanks to customizable photo-realistic AI avatars, the intuitive tool empowers beginners to create high-quality videos without actors, filming teams, or expensive equipment.
Price: From $29 a month
Who it’s best for: Video editors, brands with YouTube channels
Acrolinx goes beyond just churning out content for blog posts. It serves as a brand watchdog, ensuring content aligns with brand guidelines. Set your style, tone, grammar, and specific language, and Acrolinx will help you generate content that never deviates from the brand vision.
Price: Available upon request
Who it’s best for: Enterprise brands, marketing professionals
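Acrolinx’s governance engine is enterprise software, but the rule-based side of brand checking is easy to picture. This toy checker flags banned terms and overlong sentences; the rules are invented examples, not Acrolinx’s.

```python
# Toy brand-guideline checker illustrating rule-based content governance.
# The banned terms and sentence limit are invented example rules.
import re

BANNED = {"utilize": "use", "leverage": "use", "synergy": None}
MAX_WORDS = 25

def check(text: str) -> list[str]:
    issues = []
    for term, swap in BANNED.items():
        if re.search(rf"\b{term}\b", text, re.IGNORECASE):
            hint = f" (try '{swap}')" if swap else ""
            issues.append(f"Banned term: '{term}'{hint}")
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = len(sentence.split())
        if words > MAX_WORDS:
            issues.append(f"Long sentence ({words} words)")
    return issues

print(check("We leverage synergy to utilize our platform."))
```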
Keyword Insights is a robust SEO tool that features an advanced AI writing assistant specifically tailored for contemporary content creators.
The integrated platform seamlessly combines content research, writing, and search optimization. The writing assistant within Keyword Insights handles AI drafting while keeping a human in the loop for review and editing.
Price: Access to some SEO tools is free, with a basic plan starting at $58 a month
Who it’s best for: SEO professionals, content marketers
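Under the hood, keyword grouping of this kind often starts with clustering. Here’s a deliberately tiny illustration using TF-IDF and k-means; real products lean on search-intent and SERP-overlap signals, not raw text similarity.

```python
# Toy keyword clustering with scikit-learn, illustrating the grouping
# step tools like Keyword Insights automate at much larger scale.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

keywords = [
    "best running shoes", "running shoes for flat feet", "trail running shoes",
    "how to start a podcast", "podcast equipment for beginners", "podcast hosting",
]

vectors = TfidfVectorizer().fit_transform(keywords)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, keyword in sorted(zip(labels, keywords)):
    print(label, keyword)
```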
Surfer SEO is a tool designed to enhance the content quality of web pages, ensuring higher rankings on search engine results pages (SERPs).
It meticulously analyzes SERPs for relevant search terms and compares your content against insights gleaned from top-ranking pages. Surfer SEO evaluates keywords and other ranking metrics, offering suggestions to optimize your content for search. It also provides an outline generator and keyword research tools to help you create SEO-friendly content from the outset.
Price: Starting from $96.39 a month
Who it’s best for: Agencies, brands ready to scale
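The comparison Surfer performs can be sketched in a few lines: tally the terms top-ranking pages use, then see which ones your draft lacks. The page text and draft below are invented, and real tools fetch live SERPs and weigh far more signals.

```python
# Toy content-gap check in the spirit of Surfer SEO's SERP comparison.
# Sample texts are invented; real tools analyze live top-ranking pages.
import re
from collections import Counter

def term_counts(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

top_pages = [
    "Espresso grind size matters. A fine grind slows extraction.",
    "Dial in espresso by adjusting grind size and dose.",
]
draft = "Our espresso guide covers beans and tampers."

competitor_terms = sum((term_counts(p) for p in top_pages), Counter())
missing = {t: c for t, c in competitor_terms.items()
           if c >= 2 and t not in term_counts(draft)}
print("Terms your draft may be missing:", missing)
```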
GrowthBar employs GPT-3 AI technology to automate content generation and offers suggestions for keywords, precise word count, links, images, and more.
This tool excels at providing comprehensive backlinking strategies and creating outlines for blog post content. Bonus: GrowthBar offers a Chrome extension for added convenience.
Frase.io is a handy tool for creating SEO-optimized content efficiently.
Just enter a topic, and Frase automatically compares and extracts data from top sites using the same keyword. The AI-driven marketing tool then generates an SEO-friendly outline, allowing you to create content that’s more likely to rank in the SERPs.
Price: From $15 a month
Who it’s best for: Editors, content strategists, SEO experts
Albert positions itself as your “self-learning digital marketing ally,” with features that help it process and analyze audience info and tactical data at scale.
Albert is adept at generating and optimizing ad budgets. But it’s also designed to help with strategy and structure, so you can get more reach for less.
Price: Contact for a customized pricing plan
Who it’s best for: Digital marketers
If you’re in marketing in 2024, chances are you’re delivering to a bunch of different outlets and platforms. Skai is all about optimizing for an omnichannel strategy.
The tool uses AI-powered tech to collect and classify unstructured data points to gather insights about your unique market. Skai’s Creative Intelligence AI will even review your ad creative to offer feedback and tactics.
Wordstream is another AI-based program dedicated to elevating your advertising game. It uses machine learning (ML) to optimize ad campaigns across a variety of social media networks.
Wordstream’s key features include machine learning-driven ad performance recommendations, cross-channel assessment of PPC ads, and comprehensive campaign analysis. It’s helpful whether you’re fine-tuning existing social media ads or crafting a new campaign from the ground up.
Brandwatch is notable for its extensive reservoir of customer intelligence, specifically designed for analyzing data at scale.
Its AI capabilities generate valuable insights, statistics, and aggregated data. This streamlined approach allows users to spend less time deciphering data and more time implementing high-level takeaways.
One noteworthy feature: It can integrate with tools like ChatGPT, facilitating the production of natural language summaries for data sets.
Hootsuite users can take advantage of Hootsuite Insights powered by Brandwatch directly within the app, integrating Brandwatch’s search capabilities for a strategic advantage over competitors.
Price: Available with Hootsuite’s Enterprise accounts (request a demo now!)
Who it’s best for: Digital strategists, brand managers
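That ChatGPT-style summarization step is easy to sketch with OpenAI’s official Python SDK (v1+), assuming an OPENAI_API_KEY in your environment. The metrics below are invented; in practice they would come from Brandwatch’s reporting.

```python
# Hedged sketch of summarizing aggregated listening metrics with OpenAI's
# Python SDK. The metrics are made up; the SDK calls are the real v1 API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

metrics = """mentions: 4210 (+18% week over week)
sentiment: 62% positive, 9% negative
top topic: shipping delays (311 mentions)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Summarize these brand-listening metrics for an exec:\n{metrics}",
    }],
)
print(response.choices[0].message.content)
```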
Brand24’s AI social media monitoring tool allows brands to stay on top of real-time feedback, both positive and negative.
It conducts a comprehensive analysis of conversations spanning the web regarding the brand, products, and competitors so you can be as informed as humanly possible about the state of your reputation.
Beyond reputation management, Brand24 helps you assess ongoing marketing campaigns and resolve emerging issues before they escalate.
Price: From $99 a month
Who it’s best for: Brand managers, social media managers
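If you’re curious what baseline sentiment scoring looks like, NLTK’s off-the-shelf VADER analyzer is a reasonable toy stand-in. Production listening tools use far more robust, multilingual models, and the sample mentions here are invented.

```python
# Toy sentiment scoring with NLTK's VADER, standing in for the analysis
# social listening tools run at scale. Sample mentions are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

mentions = [
    "Absolutely love the new update, great job!",
    "Support still hasn't answered my ticket. Frustrating.",
]
for mention in mentions:
    score = analyzer.polarity_scores(mention)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:>8}  {mention}")
```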
Optimove is a comprehensive customer data platform offering a unified view of customer behavior and insights.
Optimove features include campaign performance evaluation, hyper-segmentation, A/B testing, and multi-channel tracking. The tool can provide valuable insights, aiding in decisions about campaign optimization and managing customer exposure to marketing emails.
Price: Determined by the number of customers you have; get in touch for a quote
Who it’s best for: Customer relationship management professionals
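As a worked example of the A/B testing such platforms run for you, here’s a two-variant significance check on invented email conversion numbers, using SciPy’s chi-square test on a 2x2 table.

```python
# Worked A/B test example: chi-square test on a 2x2 contingency table.
# The conversion counts are invented for illustration.
from scipy.stats import chi2_contingency

# rows: variants A and B; columns: [converted, did not convert]
table = [
    [120, 4880],  # variant A: 2.4% conversion
    [155, 4845],  # variant B: 3.1% conversion
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")
print("Significant at the 5% level" if p_value < 0.05 else "Not significant yet")
```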
Save time managing your social media presence with Hootsuite. From a single dashboard you can publish and schedule posts, find relevant conversations, engage the audience, measure results, and more. Try it free today.