Roundtables: Why It’s So Hard to Make Welfare AI Fair

Amsterdam tried using algorithms to fairly assess welfare applicants, but bias still crept in. Why did Amsterdam fail? And more important, can this ever be done right? Hear from MIT Technology Review editor Amanda Silverman, investigative reporter Eileen Guo, and Lighthouse Reports investigative reporter Gabriel Geiger as they explore whether algorithms can ever be fair.

Speakers: Eileen Guo, features & investigations reporter, Amanda Silverman, features & investigations editor, and Gabriel Geiger, investigative reporter at Lighthouse Reports.

Recorded on July 30, 2025

Roundtables: Inside OpenAI’s Empire with Karen Hao

Recorded on June 30, 2025

AI journalist Karen Hao’s book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, tells the story of OpenAI’s rise to power and its far-reaching impact all over the world. Hear from Karen Hao, former MIT Technology Review senior editor, and executive editor Niall Firth for a conversation exploring the AI arms race, what it means for all of us, and where it’s headed.

Speakers: Karen Hao, AI journalist, and Niall Firth, executive editor.

Roundtables: A New Look at AI’s Energy Use

Wednesday, May 21, 2025

Big Tech’s appetite for energy is growing rapidly as adoption of AI accelerates. But just how much energy does even a single AI query use? And what does it mean for the climate? Join editor in chief Mat Honan, senior climate reporter Casey Crownhart, and AI reporter James O’Donnell for a conversation exploring AI’s energy demands now and in the future.

Going live on May 21st at 18:30 GMT / 1:30 PM ET / 10:30 AM PT

Speakers: Mat Honan, editor in chief, Casey Crownhart, senior climate reporter, and James O’Donnell, AI reporter.

3 Things Caiwei Chen is into right now

A new play about OpenAI

I recently saw Doomers, a new play by Matthew Gasda about the aborted 2023 coup at OpenAI, here represented by a fictional company called MindMesh. The action is set almost entirely in a meeting room; the first act follows executives immediately after the firing of company CEO Seth (a stand-in for Sam Altman), and the second re-creates the board negotiations that determined his fate. It’s a solid attempt to capture the zeitgeist of Silicon Valley’s AI frenzy and the world’s moral panic over artificial intelligence, but the rapid-fire, high-stakes exchanges mean it sometimes seems to get lost in its own verbosity.

Themed dinner parties and culinary experiments

The vastness of Chinese cuisine defies easy categorization, and even in a city with no shortage of options, I often find myself cooking, not just to recapture something closer to home, but to create a home unlike one that ever existed. Recently, I’ve been experimenting with a Chinese take on the charcuterie board: pairing toasted steamed buns, called mantou, with furu, a fermented tofu spread that is sharp, pungent, and full of umami.

Sewing and copying my own clothes

I started sewing three years ago, but only in the past year have I begun making clothes from scratch. As a lover of vintage fashion, especially ’80s silhouettes, I started out with old patterns I found on Etsy. But recently, I tried something new: copying a beloved dress I bought in a thrift store in Beijing years ago. Doing this is quite literally a process of reverse-engineering—pinning the garment down, tracing its seams, deconstructing its logic, and rebuilding it. At times my brain feels like an old Mac hitting its CPU limit. But when it works, it feels like a small act of magic. It’s an exercise in certainty, the very thing that drew me to fashion in the first place: a chance to inhabit something that feels like an extension of myself.

Roundtables: Brain-Computer Interfaces: From Promise to Product

Recorded on April 23, 2025

Speakers: David Rotman, editor at large, and Antonio Regalado, senior editor for biomedicine.

Brain-computer interfaces (BCIs) have been crowned the 11th Breakthrough Technology of 2025 by MIT Technology Review’s readers. BCIs use electrodes implanted in the brain to send neural commands to computers, primarily to assist paralyzed people. Hear from MIT Technology Review editor at large David Rotman and senior editor for biomedicine Antonio Regalado as they explore the past, present, and future of BCIs.

The quest to build islands with ocean currents in the Maldives

In satellite images, the 20-odd coral atolls of the Maldives look something like skeletal remains or chalk lines at a crime scene. But these landforms, which circle the peaks of a mountain range that has vanished under the Indian Ocean, are far from inert. They’re the products of living processes—places where coral has grown toward the surface over hundreds of thousands of years. Shifting ocean currents have gradually pushed sand—made from broken-up bits of this same coral—into more than 1,000 other islands that poke above the surface. 

But these currents can also be remarkably transient, constructing new sandbanks or washing them away in a matter of weeks. In the coming decades, the daily lives of the half-million people who live on this archipelago—the world’s lowest-lying nation—will depend on finding ways to keep a solid foothold amid these shifting sands. More than 90% of the islands have experienced severe erosion, and climate change could make much of the country uninhabitable by the middle of the century.

Off one atoll, just south of the Maldives’ capital, Malé, researchers are testing one way to capture sand in strategic locations—to grow islands, rebuild beaches, and protect coastal communities from sea-level rise. Swim 10 minutes out into the En’boodhoofinolhu Lagoon and you’ll find the Ramp Ring, an unusual structure made up of six tough-skinned geotextile bladders. These submerged bags, part of a recent effort called the Growing Islands project, form a pair of parentheses separated by 90 meters (around 300 feet).

The bags, each about two meters tall, were deployed in December 2024, and by February, underwater images showed that sand had climbed about a meter and a half up the surface of each one, demonstrating how passive structures can quickly replenish beaches and, in time, build a solid foundation for new land. “There’s just a ton of sand in there. It’s really looking good,” says Skylar Tibbits, an architect and founder of the MIT Self-Assembly Lab, which is developing the project in partnership with the Malé-based climate tech company Invena.

The Self-Assembly Lab designs material technologies that can be programmed to transform or “self-assemble” in the air or underwater, exploiting natural forces like gravity, wind, waves, and sunlight. Its creations include sheets of wood fiber that form into three-dimensional structures when splashed with water, which the researchers hope could be used for tool-free flat-pack furniture. 

Growing Islands is their largest-scale undertaking yet. Since 2017, the project has deployed 10 experiments in the Maldives, testing different materials, locations, and strategies, including inflatable structures and mesh nets. The Ramp Ring is many times larger than previous deployments and aims to overcome their biggest limitation. 

In the Maldives, the direction of the currents changes with the seasons. Past experiments have been able to capture only one seasonal flow, meaning they lie dormant for months of the year. By contrast, the Ramp Ring is “omnidirectional,” capturing sand year-round. “It’s basically a big ring, a big loop, and no matter which monsoon season and which wave direction, it accumulates sand in the same area,” Tibbits says.

The approach points to a more sustainable way to protect the archipelago, whose growing population is supported by an economy that caters to 2 million annual tourists drawn by its white beaches and teeming coral reefs. Most of the country’s 187 inhabited islands have already had some form of human intervention to reclaim land or defend against erosion, such as concrete blocks, jetties, and breakwaters. Since the 1990s, dredging has become by far the most significant strategy. Boats equipped with high-power pumping systems vacuum up sand from one part of the seabed and spray it into a pile somewhere else. This temporary process allows resort developers and densely populated islands like Malé to quickly replenish beaches and build limitlessly customizable islands. But it also leaves behind dead zones where sand has been extracted—and plumes of sediment that cloud the water with a sort of choking marine smog. Last year, the government placed a temporary ban on dredging to prevent damage to reef ecosystems, which were already struggling amid spiking ocean temperatures.

Holly East, a geographer at the University of Northumbria, says Growing Islands’ structures offer an exciting alternative to dredging. But East, who is not involved in the project, warns that they must be sited carefully to avoid interrupting sand flows that already build up islands’ coastlines. 

To do this, Tibbits and Invena cofounder Sarah Dole are conducting long-term satellite analysis of the En’boodhoofinolhu Lagoon to understand how sediment flows move around atolls. On the basis of this work, the team is currently spinning out a predictive coastal intelligence platform called Littoral. The aim is for it to be “a global health monitoring system for sediment transport,” Dole says. It’s meant not only to show where beaches are losing sand but to “tell us where erosion is going to happen,” allowing government agencies and developers to know where new structures like Ramp Rings can best be placed.
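Dole doesn’t spell out how Littoral works under the hood, but the basic shape of such a system can be sketched: track shoreline positions over time and flag the segments whose trend crosses an erosion threshold. The toy example below is only an illustration of that idea, not the platform’s method; the segment names, measurements, threshold, and simple least-squares trend are all invented.

```python
import numpy as np

# Hypothetical shoreline positions (meters from a fixed baseline) for three
# beach segments, as might be digitized from annual satellite imagery.
years = np.array([2018, 2019, 2020, 2021, 2022, 2023, 2024])
segments = {
    "north_beach": np.array([42.0, 41.2, 40.1, 38.9, 37.5, 36.8, 35.4]),
    "east_spit":   np.array([12.0, 12.4, 12.1, 12.6, 12.9, 13.1, 13.4]),
    "south_flat":  np.array([25.0, 24.8, 24.9, 24.5, 24.6, 24.2, 24.1]),
}

EROSION_THRESHOLD = -0.5  # meters of retreat per year; an invented cutoff

for name, positions in segments.items():
    slope, _ = np.polyfit(years, positions, deg=1)  # shoreline trend in m/yr
    status = "erosion risk" if slope < EROSION_THRESHOLD else "stable"
    print(f"{name}: {slope:+.2f} m/yr -> {status}")
```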

Growing Islands has been supported by the National Geographic Society, MIT, the Sri Lankan engineering group Sanken, and tourist resort developers. In 2023, it got a big bump from the US Agency for International Development: a $250,000 grant that funded the construction of the Ramp Ring deployment and would have provided opportunities to scale up the approach. But the termination of nearly all USAID contracts following the inauguration of President Trump means the project is looking for new partners.

Matthew Ponsford is a freelance reporter based in London.

AI is pushing the limits of the physical world

Architecture often assumes a binary between built projects and theoretical ones. What physics allows in actual buildings, after all, is vastly different from what architects can imagine and design (often referred to as “paper architecture”). That imagination has long been supported and enabled by design technology, but the latest advancements in artificial intelligence have prompted a surge in the theoretical. 

Karl Daubmann, College of Architecture and Design at Lawrence Technological University
“Very often the new synthetic image that comes from a tool like Midjourney or Stable Diffusion feels new,” says Daubmann, “infused by each of the multiple tools but rarely completely derived from them.”

“Transductions: Artificial Intelligence in Architectural Experimentation,” a recent exhibition at the Pratt Institute in Brooklyn, brought together works from over 30 practitioners exploring the experimental, generative, and collaborative potential of artificial intelligence to open up new areas of architectural inquiry—something they’ve been working on for a decade or more, since long before AI became mainstream. Architects and exhibition co-curators Jason Vigneri-Beane, Olivia Vien, Stephen Slaughter, and Hart Marlow explain that the works in “Transductions” emerged out of feedback loops among architectural discourses, techniques, formats, and media that range from imagery, text, and animation to mixed-reality media and fabrication. The aim isn’t to present projects that are going to break ground anytime soon; architects already know how to build things with the tools they have. Instead, the show attempts to capture this very early stage in architecture’s exploratory engagement with AI.

Technology has long enabled architecture to push the limits of form and function. As early as 1963, Sketchpad, one of the first architectural software programs, allowed architects and designers to move and change objects on screen. Rapidly, traditional hand drawing gave way to an ever-expanding suite of programs—Revit, SketchUp, and other BIM tools among them—that helped create floor plans and sections, track buildings’ energy usage, enhance sustainable construction, and aid in following building codes, to name just a few uses.

The architects exhibiting in “Transductions” view newly evolving forms of AI “like a new tool rather than a profession-ending development,” says Vigneri-Beane, despite what some of his peers fear about the technology. He adds, “I do appreciate that it’s a somewhat unnerving thing for people, [but] I feel a familiarity with the rhetoric.”

After all, he says, AI doesn’t just do the job. “To get something interesting and worth saving in AI, an enormous amount of time is required,” he says. “My architectural vocabulary has gotten much more precise and my visual sense has gotten an incredible workout, exercising all these muscles which have atrophied a little bit.”

Vien agrees: “I think these are extremely powerful tools for an architect and designer. Do I think it’s the entire future of architecture? No, but I think it’s a tool and a medium that can expand the long history of mediums and media that architects can use not just to represent their work but as a generator of ideas.”
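For readers curious to try the kind of tool Vien and Vigneri-Beane describe, here is a minimal sketch of a text-to-image run, assuming the open-source Stable Diffusion model and Hugging Face’s diffusers library; the checkpoint ID, prompt, and settings are illustrative, not drawn from any exhibitor’s workflow.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a commonly used public Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU; use "cpu" and float32 otherwise

# An invented prompt in the spirit of the exhibition's speculative imagery.
prompt = (
    "speculative architectural section, cyborg ecology, "
    "infrastructural robots at building scale, ink drawing"
)

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("transduction_study.png")
```

In practice, much of the work happens around a script like this rather than in it: iterating on prompts and feeding outputs back into other tools is where the “enormous amount of time” Vigneri-Beane mentions goes.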

Andrew Kudless, Hines College of Architecture and Design
This image, part of the Urban Resolution series, shows how the Stable Diffusion AI model “is unable to focus on constructing a realistic image and instead duplicates features that are prominent in the local latent space,” Kudless says.

Jason Vigneri-Beane, Pratt Institute
“These images are from a larger series on cyborg ecologies that have to do with co-creating with machines to imagine [other] machines,” says Vigneri-Beane. “I might refer to these as cryptomegafauna—infrastructural robots operating at an architectural scale.”

Martin Summers, University of Kentucky College of Design
“Most AI is racing to emulate reality,” says Summers. “I prefer to revel in the hallucinations and misinterpretations, like glitches, and the sublogic they reveal present in a mediated reality.”

Jason Lee, Pratt Institute
Lee typically uses AI “to generate iterations or high-resolution sketches,” he says. “I am also using it to experiment with how much realism one can incorporate with more abstract representation methods.”

Olivia Vien, Pratt Institute
For the series Imprinting Grounds, Vien created images digitally and fed them into Midjourney. “It riffs on the ideas of damask textile patterns in a more digital realm,” she says.

Robert Lee Brackett III, Pratt Institute
“While new software raises concerns about the absence of traditional tools like hand drawing and modeling, I view these technologies as collaborators rather than replacements,” Brackett says.

NASA has made an air traffic control system for drones

On Thanksgiving weekend of 2013, Jeff Bezos, then Amazon’s CEO, took to 60 Minutes to make a stunning announcement: Amazon was a few years away from deploying drones that would deliver packages to homes in less than 30 minutes. 

It lent urgency to a problem that Parimal Kopardekar, director of the NASA Aeronautics Research Institute, had begun thinking about earlier that year.

“How do you manage and accommodate large-scale drone operations without overloading the air traffic control system?” Kopardekar, who goes by PK, recalls wondering. Busy managing all airplane takeoffs and landings, air traffic controllers clearly wouldn’t have the capacity to oversee the fleets of package-delivering drones Amazon was promising. 

The solution PK devised, which subsequently grew into a collaboration between federal agencies, researchers, and industry, is a system called unmanned-aircraft-system traffic management, or UTM. Instead of verbally communicating with air traffic controllers, drone operators using UTM share their intended flight paths with each other via a cloud-based network.

This highly scalable approach may finally open the skies to a host of commercial drone applications that have yet to materialize. Amazon Prime Air, for example, launched in 2022 but was put on hold after crashes at a testing facility. On any given day, only 8,500 or so unmanned aircraft fly in US airspace, the vast majority of them used for recreational purposes rather than for services like search and rescue missions, real estate inspections, video surveillance, or farmland surveys.

One obstacle to wider use has been concern over possible midair drone-to-drone collisions. (Drones are typically restricted to airspace below 400 feet and their access to airports is limited, which significantly lowers the risk of drone-airplane collisions.) Under Federal Aviation Administration regulations, drones generally cannot fly beyond an operator’s visual line of sight, limiting flights to about a third of a mile. This prevents most collisions but also most use cases, such as delivering medication to a patient’s doorstep or dispatching a police drone to an active crime scene so first responders can better prepare before arriving.

Now, though, drone operators are increasingly incorporating UTM into their flights. The system uses path planning algorithms, like those that run in Google Maps, to chart a course that considers not only weather and obstacles like buildings and trees but the flight paths of nearby drones. It’ll automatically reroute a flight before takeoff if another drone has reserved the same volume of airspace at the same time, making the new flight trajectory visible to subsequent pilots. Drones can then fly autonomously to and from their destination, and no air traffic controller is required. 
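In code, UTM-style preflight “strategic deconfliction” can be pictured as a check over shared four-dimensional reservations: a volume of airspace plus a time window. The sketch below is a toy illustration under that assumption, not NASA’s or the FAA’s actual protocol; every class name, field, and number is invented.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Reservation:
    """A hypothetical 4D airspace reservation: a bounding box plus a time window."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    alt_min: float
    alt_max: float
    t_start: float  # seconds
    t_end: float

    def overlaps(self, other: "Reservation") -> bool:
        # Two reservations conflict only if they intersect on every spatial
        # axis *and* their time windows overlap.
        return (self.x_min < other.x_max and other.x_min < self.x_max
                and self.y_min < other.y_max and other.y_min < self.y_max
                and self.alt_min < other.alt_max and other.alt_min < self.alt_max
                and self.t_start < other.t_end and other.t_start < self.t_end)

def deconflict(plan, existing, delay_step=60.0, max_delay=1800.0):
    """Slide a flight's departure time until its reserved volume is conflict-free."""
    delay = 0.0
    while delay <= max_delay:
        candidate = replace(plan, t_start=plan.t_start + delay,
                            t_end=plan.t_end + delay)
        if not any(candidate.overlaps(r) for r in existing):
            return candidate
        delay += delay_step
    return None  # no slot found; a real planner would try another route or altitude

# A second operator requests a corridor that clashes with an existing one:
reserved = [Reservation(0, 100, 0, 100, 0, 120, t_start=0, t_end=600)]
request = Reservation(50, 150, 50, 150, 0, 120, t_start=300, t_end=900)
print(deconflict(request, reserved))  # departure slides to t_start=600
```

Real UTM services layer weather, terrain, and route planning on top of a check like this, but the shared-reservation idea is what lets flights stay conflict-free without a human controller.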

Over the past decade, NASA and industry have demonstrated to the FAA through a series of tests that drones can safely maneuver around each other by adhering to UTM. And last summer, the agency gave the go-ahead for multiple drone delivery companies using UTM to begin flying simultaneously in the same airspace above Dallas—a first in US aviation history. Drone operators without in-house UTM capabilities have also begun licensing UTM services from FAA-approved third-party providers.

UTM only works if all participants abide by the same rules and agree to share data, and it’s enabled a level of collaboration unusual for companies competing to gain a foothold in a young, hot field, notes Peter Sachs, head of airspace integration strategy at Zipline, a drone delivery company based in South San Francisco that’s approved to use UTM. 

“We all agree that we need to collaborate on the practical, behind-the-scenes nuts and bolts to make sure that this preflight deconfliction for drones works really well,” Sachs says. (“Strategic deconfliction” is the technical term for processes that minimize drone-drone collisions.) Zipline and the drone delivery companies Wing, Flytrex, and DroneUp all operate in the Dallas area and are racing to expand to more cities, yet they disclose where they’re flying to one another in the interest of keeping the airspace conflict-free.

Greater adoption of UTM may be on the way. The FAA is expected to soon release a new rule called Part 108 that may allow operators to fly beyond visual line of sight if, among other requirements, they have some UTM capability, eliminating the need for the difficult-to-obtain waiver the agency currently requires for these flights. To safely manage this additional drone traffic, drone companies will have to continue working together to keep their aircraft out of each other’s way.

Yaakov Zinberg is a writer based in Cambridge, Massachusetts.

How a 1980s toy robot arm inspired modern robotics

As the child of an electronic engineer, I spent a lot of time in our local Radio Shack growing up. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

A drawing from the patent application for the Armatron robotic arm.
COURTESY OF TAKARA TOMY

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the “elbow” joint, rotate the “wrist,” and open and close the bright-orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks.

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring “CLICK … CLICK … CLICK.”

It wasn’t just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had “capabilities usually found only in much more expensive experimental arms.”

Pieces of the Armatron, disassembled and arranged on a table.
JIM GOLDEN

A few years ago I found my Armatron, and when I opened the case to get it working again, I was startled to find that other than the compartment for the pair of D-cell batteries, a switch, and a tiny three-volt DC motor, this thing was totally devoid of any electronic components. It was purely mechanical. Later, I found the patent drawings for the Armatron online and saw how incredibly complex the schematics of the gearbox were. This design was the work of a genius—or a madman.

The man behind the arm

I needed to know the story of this toy. I reached out to the manufacturer, Tomy (now known as Takara Tomy), which has been in business in Japan for over 100 years. It put me in touch with Hiroyuki Watanabe, a 69-year-old engineer and toy designer living in Tokyo. He’s retired now, but he worked at Tomy for 49 years, building many classic handheld electronic toys of the ’80s, including Blip, Digital Diamond, Digital Derby, and Missile Strike. Watanabe’s name can be found on 44 patents, and he was involved in bringing between 50 and 60 products to market. Watanabe answered emailed questions via video, and his responses were translated from Japanese.

“I didn’t have a period where I studied engineering professionally. Instead, I enrolled in what Japan would call a technical high school that trains technical engineers, and I actually [entered] the electrical department there,” he told me. 

Afterward, he worked at Komatsu Manufacturing—because, he said, he liked bulldozers. But in 1974, he saw that Tomy was hiring, and he wanted to make toys. “I was told that it was the No. 1 toy company in Japan, so I decided [it was worth a look],” he said. “I took a night train from Tohoku to Tokyo to take a job exam, and that’s how I ended up joining the company.”

The inspiration for the Armatron came from a newspaper clipping that Watanabe’s boss brought to him one day. “It showed an image of a [mechanical arm] holding an egg with three fingers. I think we started out thinking, ‘This is where things are heading these days, so let’s make this,’” he recalled. 

As the lead of a small team, Watanabe briefly turned his attention to another project, and by the time he returned to the robotic arm, the team had a prototype. But it was quite different from the Armatron’s final form. “The hand stuck out from the main body to the side and could only move about 90 degrees. The control panel also had six movement positions, and they were switched using six switches. I personally didn’t like that,” said Watanabe. So he went back to work.

The Armatron’s inventor, Hiroyuki Watanabe, in Tokyo in 2025
COURTESY OF TAKARA TOMY

Watanabe’s breakthrough was inspired by the radio-controlled helicopters he operated as a hobby. Holding up a radio remote controller with dual joystick controls, he told me, “This stick operation allows you to perform four movements with two arms, but I thought that if you twist this part, you can use six movements.”
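Watanabe’s arithmetic is easy to restate: two sticks, each with three inputs (left-right, forward-back, twist), yield 2 × 3 = 6 independent controls, one per motion. As a software analogue, here is a hypothetical sketch of that mapping; the axis names and motion labels are invented for illustration, and the real toy, of course, routes a single motor through gears rather than running code.

```python
# Invented mapping of two 3-axis joysticks to the Armatron's six motions.
ARMATRON_CONTROLS = {
    ("left", "x"): "rotate base",
    ("left", "y"): "raise/lower shoulder",
    ("left", "twist"): "bend elbow",
    ("right", "x"): "pivot wrist",
    ("right", "y"): "rotate wrist",
    ("right", "twist"): "open/close hand",
}

def command(stick: str, axis: str, deflection: float) -> str:
    """Translate one joystick deflection (-1.0 to 1.0) into a motion command."""
    motion = ARMATRON_CONTROLS[(stick, axis)]
    direction = "forward" if deflection > 0 else "reverse"
    return f"{motion}: {direction} at {abs(deflection):.0%}"

print(command("right", "twist", -0.5))  # open/close hand: reverse at 50%
```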

Watanabe at work at Tomy in Tokyo in 1982.
COURTESY OF HIROYUKI WATANABE

“I had always wanted to create a system that could rotate 360 degrees, so I thought about how to make that system work,” he added.

Watanabe stressed that while he is listed as the Armatron’s primary inventor, it was a team effort. A designer created the case, colors, and logo, adding touches to mimic features seen on industrial robots of the time, such as the rubber tubes (which are just for looks). 

When the Armatron first came out, in 1981, robotics engineers started contacting Watanabe. “I wasn’t so much hearing from people at toy stores, but rather from researchers at university laboratories, factories, and companies that were making industrial robots,” he said. “They were quite encouraging, and we often talked together.”

The long reach of the robot at Radio Shack

The bold look and function of the Armatron made quite an impression on many young kids who would one day have a career in robotics.

One of them was Adam Borrell, a mechanical design engineer who has been building robots for 15 years at Boston Dynamics, including Petman, the YouTube-famous Atlas, and the dog-size quadruped called Spot. 

Borrell grew up a few blocks away from a Radio Shack in New York City. “If I was going to the subway station, we would walk right by Radio Shack. I would stop in and play with it and set the timer, do the challenges,” he says. “I know it was a toy, but that was a real robot.” The Armatron was the hook that lured him into Radio Shack and then sparked his lifelong interest in engineering: “I would roll pennies and use them to buy soldering irons and solder at Radio Shack.” 

“There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

Borrell had a fateful reunion with the toy while in grad school for engineering. “One of my office mates had an Armatron at his desk,” he recalls, “and it was broken. We took it apart together, and that was the first time I had seen the guts of it. 

“It had this fantastic mechanical gear train to just engage and disengage this one motor in a bunch of different ways. And it was really fascinating that it had done so much—the one little motor. And that sort of got me back thinking about industrial robot arms again.” 

Eric Paulos, a professor of electrical engineering and computer science at the University of California, Berkeley, recalls nagging his parents about what an educational gift Armatron would make. Ultimately, he succeeded in his lobbying. 

“It was just endless exploration of picking stuff up and moving it around and even just watching it move. It was mesmerizing to me. I felt like I really owned my own little robot,” he recalls. “I cherish this thing. I still have it to this day, and it’s still working.” 

The Armatron on the cover of the November/December 1982 issue of Robotics Age magazine.
PUBLIC DOMAIN

Today, Paulos builds robots and teaches his students how to build their own. He challenges them to solve problems within constraints, such as building with cardboard or Play-Doh; he believes the restrictions facing Watanabe and his team ultimately forced them to be more creative in their engineering.

It’s not very hard to draw connections between the Armatron—an impossibly analog robot—and highly advanced machines that are today learning to move in incredible new ways, powered by AI advancements like computer vision and reinforcement learning.

Paulos sees parallels between the problems he tackled as a kid with his Armatron and those that researchers are still trying to deal with today: “What happens when you pick things up and they’re too heavy, but you can sort of pick it up if you approach it from different angles? Or how do you grip things? There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

While AI may be taking over the world of robotics, the field still requires engineers—builders and tinkerers who can problem-solve in the physical world. 

A page from the 1984 Radio Shack catalogue, featuring the Armatron for $31.95.
COURTESY OF RADIOSHACKCATALOGS.COM

The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn’t faded. Today, a new generation of fans is rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one where the arm has been modified to run on steam power.

“I’m very happy to see people who love mechanisms are amazed,” Watanabe told me. “I’m really happy that there are still people out there who love our products in this way.” 

Jon Keegan writes about technology and AI and publishes Beautiful Public Data, a curated collection of government data sets (beautifulpublicdata.com).