This US startup makes a crucial chip material and is taking on a Japanese giant

It can be dizzying to try to understand all the complex components of a single computer chip: layers of microscopic components linked to one another through highways of copper wires, some barely wider than a few strands of DNA. Nestled between those wires is an insulating material called a dielectric, ensuring that the wires don’t touch and short out. Zooming in further, there’s one particular dielectric placed between the chip and the structure beneath it; this material, called dielectric film, is produced in sheets as thin as white blood cells. 

For 30 years, a single Japanese company called Ajinomoto has made billions producing this particular film. Competitors have struggled to outdo it, and today Ajinomoto holds more than 90% of the market for the product, which is used in everything from laptops to data centers.

But now, a startup based in Berkeley, California, is embarking on a herculean effort to dethrone Ajinomoto and bring this small slice of the chipmaking supply chain back to the US.

Thintronics is promising a product purpose-built for the computing demands of the AI era—a suite of new materials that the company claims have better insulating properties and, if adopted, could mean data centers with faster computing speeds and lower energy costs.

The company is at the forefront of a coming wave of new US-based companies, spurred by the $280 billion CHIPS and Science Act, that is seeking to carve out a portion of the semiconductor sector, which has become dominated by just a handful of international players. But to succeed, Thintronics and its peers will have to overcome a web of challenges—solving technical problems, disrupting long-standing industry relationships, and persuading global semiconductor titans to accommodate new suppliers. 

“Inventing new materials platforms and getting them into the world is very difficult,” Thintronics founder and CEO Stefan Pastine says. It is “not for the faint of heart.”

The insulator bottleneck

If you recognize the name Ajinomoto, you’re probably surprised to hear it plays a critical role in the chip sector: the company is better known as the world’s leading supplier of MSG seasoning powder. In the 1990s, Ajinomoto discovered that a by-product of MSG made a great insulator, and it has enjoyed a near monopoly in the niche material ever since. 

But Ajinomoto doesn’t make any of the other parts that go into chips. In fact, the insulating materials in chips rely on dispersed supply chains: one layer uses materials from Ajinomoto, another uses material from another company, and so on, with none of the layers optimized to work in tandem. The resulting system works okay when data is being transmitted over short paths, but over longer distances, like between chips, weak insulators act as a bottleneck, wasting energy and slowing down computing speeds. That has recently become a growing concern, especially as AI training grows more expensive and consumes eye-popping amounts of energy. (Ajinomoto did not respond to requests for comment.)

None of this made much sense to Pastine, a chemist who sold his previous company, which specialized in recycling hard plastics, to an industrial chemicals company in 2019. Around that time, he started to believe that the chemicals industry could be slow to innovate, and he thought the same pattern was keeping chipmakers from finding better insulating materials. In the chip industry, he says, insulators have “kind of been looked at as the redheaded stepchild”—they haven’t seen the progress made with transistors and other chip components. 

He launched Thintronics that same year, with the hope that cracking the code on a better insulator could provide data centers with faster computing speeds at lower costs. That idea wasn’t groundbreaking—new insulators are constantly being researched and deployed—but Pastine believed that he could find the right chemistry to deliver a breakthrough. 

Thintronics says it will manufacture different insulators for all layers of the chip, for a system designed to swap into existing manufacturing lines. Pastine tells me the materials are now being tested with a number of industry players. But he declined to provide names, citing nondisclosure agreements, and similarly would not share details of the formula. 

Without more details, it’s hard to say exactly how well the Thintronics materials compare with competing products. The company recently tested its materials’ Dk values (the dielectric constant, a measure of how effective an insulator a material is). Venky Sundaram, a researcher who has founded multiple semiconductor startups but is not involved with Thintronics, reviewed the results. Some of Thintronics’ numbers were fairly average, he says, but its most impressive Dk value is far better than anything available today.
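(For a sense of why Dk matters for speed: a standard transmission-line relation says signals in an interconnect propagate at v = c/√Dk, so a lower dielectric constant means faster signals and less capacitive loading. The sketch below uses illustrative round numbers, not Thintronics’ or Ajinomoto’s undisclosed figures.)

```python
# Back-of-envelope: how an insulator's dielectric constant (Dk) affects
# signal speed. The Dk values here are illustrative round numbers only.
C = 299_792_458  # speed of light in vacuum, m/s

def propagation_speed(dk: float) -> float:
    """Phase velocity of a signal in a dielectric: v = c / sqrt(Dk)."""
    return C / dk ** 0.5

for dk in (4.0, 3.0, 2.0):
    print(f"Dk = {dk}: signals travel at {propagation_speed(dk) / C:.0%} of light speed")
```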

A rocky road ahead

Thintronics’ vision has already garnered some support. The company closed a $20 million Series A funding round in March, led by venture capital firms Translink and Maverick, and it has also received a grant from the US National Science Foundation.

The company is also seeking funding from the CHIPS Act. Signed into law by President Joe Biden in 2022, it’s designed to boost companies like Thintronics in order to bring semiconductor manufacturing back to American companies and reduce reliance on foreign suppliers. A year after it became law, the administration said that more than 450 companies had submitted statements of interest to receive CHIPS funding for work across the sector. 

The bulk of funding from the legislation is destined for large-scale manufacturing facilities, like those operated by Intel in New Mexico and Taiwan Semiconductor Manufacturing Company (TSMC) in Arizona. But US Secretary of Commerce Gina Raimondo has said she’d like to see smaller companies receive funding as well, especially in the materials space. In February, applications opened for a pool of $300 million earmarked specifically for materials innovation. While Thintronics declined to say how much funding it was seeking or from which programs, the company does see the CHIPS Act as a major tailwind.

But building a domestic supply chain for chips—a product that currently depends on dozens of companies around the globe—will mean reversing decades of specialization by different countries. And industry experts say it will be difficult to challenge today’s dominant insulator suppliers, who have often had to adapt to fend off new competition. 

“Ajinomoto has been a 90-plus-percent-market-share material for more than two decades,” says Sundaram. “This is unheard-of in most businesses, and you can imagine they didn’t get there by not changing.”

One big challenge is that the dominant manufacturers have decades-long relationships with chip designers like Nvidia or Advanced Micro Devices, and with manufacturers like TSMC. Asking these players to swap out materials is a big deal.

“The semiconductor industry is very conservative,” says Larry Zhao, a semiconductor researcher who has worked in the dielectrics industry for more than 25 years. “They like to use the vendors they already know very well, where they know the quality.” 

Another obstacle facing Thintronics is technical: insulating materials, like other chip components, are held to manufacturing standards so precise they are difficult to comprehend. The layers where Ajinomoto dominates are thinner than a human hair. The material must also be able to accept tiny holes, which house wires running vertically through the film. Every new iteration is a massive R&D effort in which incumbent companies have the upper hand given their years of experience, says Sundaram.

If all this is completed successfully in a lab, yet another hurdle lies ahead: the material has to retain those properties in a high-volume manufacturing facility, which is where Sundaram has seen past efforts fail.

“I have advised several material suppliers over the years that tried to break into [Ajinomoto’s] business and couldn’t succeed,” he says. “They all ended up having the problem of not being as easy to use in a high-volume production line.” 

Despite all these challenges, one thing may be working in Thintronics’ favor: US-based tech giants like Microsoft and Meta are making headway in designing their own chips for the first time. The plan is to use these chips for in-house AI training as well as for the cloud computing capacity that they rent out to customers, both of which would reduce the industry’s reliance on Nvidia. 

Though Microsoft, Google, and Meta declined to comment on whether they are pursuing advancements in materials like insulators, Sundaram says these firms could be more willing to work with new US startups rather than defaulting to the old ways of making chips: “They have a lot more of an open mind about supply chains than the existing big guys.”

Modernizing data with strategic purpose

Data modernization is squarely on the corporate agenda. In our survey of 350 senior data and technology executives, just over half say their organization has either undertaken a modernization project in the past two years or is implementing one today. An additional one-quarter plan to do so in the next two years. Other studies also consistently point to businesses’ increased investment in modernizing their data estates.

It is no coincidence that this heightened attention to improving data capabilities coincides with interest in AI, especially generative AI, reaching a fever pitch. Indeed, supporting the development of AI models is among the top reasons the organizations in our research seek to modernize their data capabilities. But AI is not the only reason, or even the main one.

This report seeks to understand organizations’ objectives for their data modernization projects and how they are implementing such initiatives. To do so, it surveyed senior data and technology executives across industries. The research finds that many have made substantial progress and investment in data modernization. Alignment on data strategy and the goals of modernization appear to be far from complete in many organizations, however, leaving a disconnect between data and technology teams and the rest of the business. Data and technology executives and their teams can still do more to understand their colleagues’ data needs and actively seek their input on how to meet them.

Following are the study’s key findings:

AI isn’t the only reason companies are modernizing the data estate. Better decision-making is the primary aim of data modernization, with nearly half of executives (46%) citing this among their top three drivers. Support for AI models (40%) and for decarbonization (38%) are also major drivers of modernization, as are improving regulatory compliance (33%) and boosting operational efficiency (32%).

Data strategy is too often siloed from business strategy. Nearly all surveyed organizations recognize the importance of taking a strategic approach to data. Only 22% say they lack a fully developed data strategy. When asked if their data strategy is completely aligned with key business objectives, however, only 39% agree. Data teams can also do more to bring other business units and functions into strategy discussions: 42% of respondents say their data strategy was developed exclusively by the data or technology team.

Data strategy paves the road to modernization. It is probably no coincidence that most organizations (71%) that have embarked on data modernization in the past two years have had a data strategy in place for longer than that. Modernization goals require buy-in from the business, and implementation decisions need strategic guidance, lest they lead to added complexity or duplication.

Top data pain points are data quality and timeliness. Executives point to substandard data (cited by 41%) and untimely delivery (33%) as the facets of their data operations most in need of improvement. Incomplete or inaccurate data leads enterprise users to question data trustworthiness. This helps explain why the most common modernization measure taken by our respondents’ organizations in the past two years has been to review and upgrade data governance (cited by 45%).

Cross-functional teams and DataOps are key levers to improve data quality. Modern data engineering practices are taking root in many businesses. Nearly half of organizations (48%) are empowering cross-functional data teams to enforce data quality standards, and 47% are prioritizing the implementation of DataOps. These sorts of practices, which echo the agile methodologies and product thinking that have become standard in software engineering, are only starting to make their way into the data realm.

Compliance and security considerations often hinder modernization. Compliance and security concerns are major impediments to modernization, each cited by 44% of the respondents. Regulatory compliance is mentioned particularly frequently by those working in energy, public sector, transport, and financial services organizations. High costs are another oft-cited hurdle (40%), especially among the survey’s smaller organizations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

How ASML took over the chipmaking chessboard

On a drab Monday morning in San Jose, California, at the drab San Jose Convention Center, attendees of the SPIE Advanced Lithography and Patterning Conference filed into the main ballroom until all the seats were taken and the crowd began to line the walls along the back and sides of the room. The convention brings together people who work in the chip industry from all over the world. And on this cool February morning, they had gathered to hear tech industry luminaries extol the late Gordon Moore, Intel’s cofounder and former CEO.

Craig Barrett, also a former CEO of Intel, paid tribute, as did the legendary engineer Burn-Jeng Lin, a pioneer of immersion lithography, a patterning technology that enabled the chip industry to continue moving forward about 20 years ago. Mostly the speeches tended toward reflections on Moore himself—testaments to his genius, accomplishments, and humanity. But the last speaker of the morning, Martin van den Brink, took a different tone, more akin to a victory lap than a eulogy. Van den Brink is the outgoing co-president and CTO of ASML, the Dutch company that makes the machines that in turn let manufacturers produce the most advanced computer chips in the world. 

Moore’s Law holds that the number of transistors on an integrated circuit doubles every two years or so. In essence, it means that chipmakers are always trying to shrink the transistors on a microchip in order to pack more of them in. That cadence has become increasingly hard to maintain now that transistor dimensions measure just a few nanometers. In recent years, ASML’s machines have kept Moore’s Law from sputtering out. Today, they are the only ones in the world capable of producing circuitry at the density needed to keep chipmakers roughly on track. It is the premise of Moore’s Law itself, van den Brink said, that drives the industry forward, year after year.

To showcase how big an achievement it had been to maintain Moore’s Law since he joined ASML in 1984, van den Brink referred to the rice and chessboard problem, in which the number of grains of rice—a proxy for transistors—is doubled on each successive square. The exponential growth in the number of transistors that can be crammed on a chip since 1959 means that a single grain of rice back then has now become the equivalent of three ocean tankers, each 240 meters long, full of rice. It’s a lot of rice! Yet Moore’s Law compels the company—compels all of the technology industry—to keep pushing forward. Each era of computing, most recently AI, has brought increased demands, explained van den Brink. In other words, while three tankers full of rice may seem like a lot, tomorrow we’re going to need six. Then 12. Then 24. And so on. 
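(The arithmetic behind the analogy is easy to check; here is a minimal sketch of the doubling, with the 64-square total included. The numbers are the classic puzzle’s, not figures from van den Brink’s talk.)

```python
# Rice-and-chessboard doubling: one grain on the first square,
# twice as many on each square after that.
for square in (1, 8, 16, 32, 64):
    grains = 2 ** (square - 1)  # grains on this square alone
    print(f"square {square:2d}: {grains:,} grains")

# Total across all 64 squares: 2**64 - 1, about 18.4 quintillion grains.
print(f"whole board: {2 ** 64 - 1:,} grains")
```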

ASML’s technology, he assured the gathering, would be there to meet the demands, thanks to the company’s investment in creating tools capable of making ever finer features: the extreme-ultraviolet (EUV) lithography machines it rolled out widely in 2017, the high-numerical-aperture (high-NA) EUV machines it is rolling out now, and the hyper-NA EUV machines it has sketched out for the future. 

The tribute may have been designed for Gordon Moore, but at the end of van den Brink’s presentation the entire room rose to give him a standing ovation. Because if Gordon Moore deserves credit for creating the law that drove the progress of the industry, as van den Brink says, van den Brink and ASML deserve much of the credit for ensuring that progress remains possible. 

Yet that also means the pressure is on. ASML has to try to stay ahead of the demands of Moore’s Law. It has to continue making sure chipmakers can keep doubling the amount of rice on the chessboard. Will that be possible? Van den Brink sat down with MIT Technology Review to talk about ASML’s history, its legacy, and what comes next.

Betting big on an unwieldy wavelength

ASML is such an undisputed leader in today’s chip ecosystem that it’s hard to believe the company’s market dominance really only dates back to 2017, when its EUV machine, after 17 years of development, upended the conventional process for making chips. 

Since the 1960s, photolithography has made it possible to pack computer chips with more and more components. The process involves crafting small circuits by guiding beams of light through a series of mirrors and lenses and then shining that light on a mask, which contains a pattern. Light conveys the chip design, layer by layer, eventually building circuits that form the computational building blocks of everything from smartphones to artificial intelligence. 

Martin van den Brink (Credit: ASML)

Photolithographers have a limited set of tools at their disposal to make smaller designs, and for decades, the type of light used in the machine was the most critical. In the 1960s, machines used beams of visible light. The smallest features this light could draw on the chip were fairly large—a bit like using a marker to draw a portrait. 

Then manufacturers began using smaller and smaller wavelengths of light, and by the early 1980s, they could make chips with ultraviolet light. Nikon and Canon were the industry leaders. ASML, founded in 1984 as a subsidiary of Philips in Eindhoven, the Netherlands, was just a small player.

The way van den Brink tells it, he arrived at the company almost by accident. Philips was one of a few technology companies in Holland. When he began his career there in 1984 and was looking into the various opportunities at the company, he became intrigued by a photo of a lithography machine.

“I looked at the picture and I said, ‘It has mechanics, it has optics, it has software—this looks like a complex machine. I will be interested in that,’” van den Brink told MIT Technology Review. “They said, well, you can do it, but the company will not be part of Philips. We are creating a joint venture with ASM International, and after the joint venture, you will not be part of Philips. I said yes because I couldn’t care less. And that’s how it began.”

When van den Brink joined in the 1980s, little about ASML made the company stand out from other major lithography players at the time. “We didn’t sell a substantial amount of systems until the ’90s. And we almost went bankrupt several times in that period,” van den Brink says. “So for us there was only one mission: to survive and show a customer that we could make a difference.”

By 1995, it had a strong enough foothold in the industry against competitors Nikon and Canon to go public. But all lithography makers were fighting the same battle to create smaller components on chips. 

If you could have eavesdropped on a meeting at ASML in the late 1990s about this predicament, you might have heard chatter about an idea called extreme-ultraviolet (EUV) lithography—along with concerns that it might never work. By that point, with pressure to condense chips beyond current capabilities, it seemed as if everyone was chasing EUV. The idea was to pattern chips with an even smaller wavelength of light (ultimately just 13.5 nanometers). To do so, ASML would have to figure out how to create, capture, and focus this light—processes that had stumped researchers for decades—and build a supply chain of specialized materials, including the smoothest mirrors ever produced. And it would have to make sure the price point wouldn’t drive away its customers.

Canon and Nikon were also pursuing EUV, but the US government denied them a license to participate in the consortium of companies and US national labs researching it. Both subsequently dropped out. Meanwhile ASML acquired the fourth major company pursuing EUV, SVG, in 2001. By 2006 it had shipped only two EUV prototype machines to research facilities, and it took until 2010 to ship one to a customer. Five years later, ASML warned in its annual report that EUV sales remained low, that customers weren’t eager to adopt the technology given its slow speed on the production line, and that if the pattern continued, it could have “material” effects on the business given the significant investment. 

Yet in 2017, after an investment of $6.5 billion in R&D over 17 years, ASML’s bet began to pay off. That year the company shipped 10 of its EUV machines, which cost over $100 million each, and announced that dozens more were on backorder. EUV machines went to the titans of semiconductor manufacturing—Intel, Samsung, and Taiwan Semiconductor Manufacturing Company (TSMC)—and a small number of others. With a brighter light source (meaning less time needed to impart patterns), among other improvements, the machines were capable of faster production speeds. The leap to EUV finally made economic sense to chipmakers, putting ASML essentially in a monopoly position.

Chris Miller, a history professor at Tufts University and author of Chip War: The Fight for the World’s Most Critical Technology, says that ASML was culturally equipped to see those experiments through. “It’s a stubborn willingness to invest in technology that most people thought wouldn’t work,” he told MIT Technology Review. “No one else was betting on EUV, because the development process was so long and expensive. It involves stretching the limits of physics, engineering, and chemistry.”

A key factor in ASML’s growth was its control of the supply chain. ASML acquired a number of the companies it relies on, like Cymer, a maker of light sources. That strategy of pointedly controlling power in the supply chain extended to ASML’s customers, too. In 2012, it offered shares to its three biggest customers, which were able to maintain market dominance of their own in part because of the elite manufacturing power of ASML’s machines.

“Our success depends on their success,” van den Brink told MIT Technology Review.

It’s also a testament to ASML’s dominance that it is for the most part no longer allowed to sell its most advanced systems to customers in China. Though ASML still does business in China, in 2019, following pressure from the Trump administration, the Dutch government began imposing restrictions on ASML’s exports of EUV machines to China. Those rules were tightened further just last year and now also impose limits on some of the company’s deep-ultraviolet (DUV) machines, which are used to make less highly advanced chips than EUV systems.

Van den Brink says the way world leaders are now discussing lithography was unimaginable when the company began: “Our prime minister was sitting in front of Xi Jinping, not because he was from Holland—who would give a shit about Holland. He was there because we are making EUV.”

Just a few years after the first EUV machines shipped, ASML would face its second upheaval. Around the start of the pandemic, interest and progress in the field of artificial intelligence sent demand for computing power skyrocketing. Companies like OpenAI needed ever more powerful computer chips, and by late 2022 the frenzy and investment in AI began to boil over.

By that time, ASML was closing in on its newest innovation. Having already adopted a smaller wavelength of light (and realigned the entire semiconductor industry to it in the process), it now turned its attention to the other lever in its control: numerical aperture. That’s the measure of how much light a system can focus, and if ASML could increase it, the company’s machines could print even smaller components.
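(How much smaller? Lithographers’ usual rule of thumb is the Rayleigh criterion, which ties the smallest printable feature to the wavelength divided by the numerical aperture. The worked numbers below assume an illustrative process factor of k₁ ≈ 0.3; they are a sketch, not ASML specifications.)

```latex
% Rayleigh criterion: smallest printable feature (critical dimension, CD).
% k_1 is process-dependent; 0.3 is an assumed illustrative value.
\mathrm{CD} = k_1 \frac{\lambda}{\mathrm{NA}}, \qquad \lambda = 13.5\,\mathrm{nm},\; k_1 \approx 0.3
\quad\Longrightarrow\quad
\begin{cases}
\mathrm{NA} = 0.33: & \mathrm{CD} \approx 12\,\mathrm{nm} \\
\mathrm{NA} = 0.55: & \mathrm{CD} \approx 7.4\,\mathrm{nm} \\
\mathrm{NA} = 0.70: & \mathrm{CD} \approx 5.8\,\mathrm{nm}
\end{cases}
```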

Doing so meant myriad changes. ASML had to source an even larger set of mirrors from its supplier Carl Zeiss, which had to be made ultra-smooth. Zeiss had to build entirely new machines, the sole purpose of which was to measure the smoothness of mirrors destined for ASML. The aim was to reduce the number of costly repercussions the change would have on the rest of the supply chain, like the companies that make reticles containing the designs of the chips. 

In December of 2023, ASML began shipping the first of its next-generation EUV devices, a high-NA machine, to Intel’s facility in Hillsboro, Oregon. It’s an R&D version, and so far the only one in the field. It took seven planes and 50 trucks to get it to Intel’s plant, and installation of the machine, which is larger than a double-decker bus, will take six months.

The high-NA machines will only be needed to produce the most precise layers of advanced chips for the industry; the designs on many others will still be printed using the previous generation of EUV machines or older DUV machines. 

ASML has received orders for high-NA machines from all its current EUV customers. They don’t come cheap: reports put the cost at $380 million. Intel was the first customer to strike, ordering the first machine available in early 2022. The company, which has lost significant market share to competitor TSMC, is betting that the new technology will give it a new foothold in the industry, even though other chipmakers will eventually have access to it too. 

“There are obvious benefits to Intel for being the first,” Miller says. “There are also obvious risks.” Sorting out which chips to use these machines for and how to get its money’s worth out of them will be a challenge for the company, according to Miller. 

The launch of these machines, if successful, might be seen as the crowning achievement of van den Brink’s career. But he is already moving on to what comes next.

The future

The next big idea for ASML, according to van den Brink and other company executives who spoke with MIT Technology Review, is hyper-NA technology. The company’s high-NA machines have a numerical aperture of 0.55; hyper-NA tools would have a numerical aperture higher than 0.7. If it pans out, hyper NA will let the company build machines that allow manufacturers to shrink transistor dimensions even further—assuming that researchers can devise chip components that work well at such small dimensions. As it was with EUV in the early 2000s, it is still uncertain whether hyper NA is feasible—if nothing else, it could be cost prohibitive. Yet van den Brink projects cautious confidence. It is likely, he says, that the company will ultimately have three offerings: low NA, high NA, and—if all goes well—hyper NA.

“Hyper NA is a bit more risky,” says van den Brink. “We will be more cautious and more cost sensitive in the future. But if we can pull this off, we have a winning trio which takes care of all the advanced manufacturing for the foreseeable future.”

Yet although today everyone is banking on ASML to keep pushing the industry forward, there is speculation that a competitor could emerge from China. Van den Brink was dismissive of this possibility, citing the gap in even last-generation lithography. 

“SMEE are making DUV machines, or at least claim they can,” he told MIT Technology Review, referring to a Chinese company that makes machines based on the predecessor to EUV lithography technology, and pointed out that ASML still has the dominant market share. The political pressure could spur more progress in China. But getting to the level of complexity involved in ASML’s suite of machines, with low, high, and hyper NA, is another matter, he says: “I feel quite comfortable that this will be a long time before they can copy that.”

Miller, from Tufts University, is confident that Chinese companies will eventually develop these sorts of technologies on their own, but agrees that the question is when. “If it’s in a decade, it will be too late,” he says. 

The real question, perhaps, is not who will make the machines, but whether Moore’s Law will hold at all. Nvidia CEO Jensen Huang has already declared it dead. But when asked what he thought might eventually cause Moore’s Law to finally stall out, van den Brink rejected the premise entirely. 

“There’s no reason to believe this will stop. You won’t get the answer from me where it will end,” he said. “It will end when we’re running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.”

He had struck a similar posture during his Moore tribute at the SPIE conference, exuding confidence. “I’m not sure who will give the presentation 10 years from now,” he said, going back to his rice analogy. “But my successors,” he claimed, “will still have the opportunity to fill the chessboard.”

This story was updated to clarify information about ASML’s operations in China.

VR headsets can be hacked with an Inception-style attack

In the Christopher Nolan movie Inception, Leonardo DiCaprio’s character uses technology to enter his targets’ dreams to steal information and insert false details into their subconscious.

A new “inception attack” in virtual reality works in a similar way. Researchers at the University of Chicago exploited a security vulnerability in Meta’s Quest VR system that allows hackers to hijack users’ headsets, steal sensitive information, and—with the help of generative AI—manipulate social interactions. 

The attack hasn’t been used in the wild yet, and the bar to executing it is high, because it requires a hacker to gain access to the VR headset user’s Wi-Fi network. However, it is highly sophisticated and leaves those targeted vulnerable to phishing, scams, and grooming, among other risks. 

In the attack, hackers create an app that injects malicious code into the Meta Quest VR system and then launch a clone of the VR system’s home screen and apps that looks identical to the user’s original screen. Once inside, attackers can see, record, and modify everything the person does with the headset. That includes tracking voice, gestures, keystrokes, browsing activity, and even the user’s social interactions. The attacker can even change the content of a user’s messages to other people. The research, which was shared with MIT Technology Review exclusively, is yet to be peer reviewed.

A spokesperson for Meta said the company plans to review the findings: “We constantly work with academic researchers as part of our bug bounty program and other initiatives.” 

VR headsets have slowly become more popular in recent years, but security research has lagged behind product development, and current defenses against attacks in VR are lacking. What’s more, the immersive nature of virtual reality makes it harder for people to realize they’ve fallen into a trap. 

“The shock in this is how fragile the VR systems of today are,” says Heather Zheng, a professor of computer science at the University of Chicago, who led the team behind the research. 

Stealth attack

The inception attack exploits a loophole in Meta Quest headsets: users must enable “developer mode” to download third-party apps, adjust their headset resolution, or screenshot content, but this mode also lets attackers gain access to the headset if they are on the same Wi-Fi network as the user.

Developer mode is supposed to give people remote access for debugging purposes. However, that access can be repurposed by a malicious actor to see what a user’s home screen looks like and which apps are installed. (Attackers can also strike if they are able to access a headset physically or if a user downloads apps that include malware.) With this information, the attacker can replicate the victim’s home screen and applications. 

Then the attacker stealthily injects an app with the inception attack in it. The attack is activated and the VR headset hijacked when unsuspecting users exit an application and return to the home screen. The attack also captures the user’s display and audio stream, which can be livestreamed back to the attacker. 

In this way, the researchers were able to see when a user entered login credentials to an online banking site. Then they were able to manipulate the user’s screen to show an incorrect bank balance. When the user tried to pay someone $1 through the headset, the researchers were able to change the amount transferred to $5 without the user realizing. This is because the attacker can control both what the user sees in the system and what the device sends out. 

This banking example is particularly compelling, says Jiasi Chen, an associate professor of computer science at the University of Michigan, who researches virtual reality but was not involved in the research. The attack could probably be combined with other malicious tactics, such as tricking people into clicking on suspicious links, she adds.

The inception attack can also be used to manipulate social interactions in VR. The researchers cloned Meta Quest’s VRChat app, which allows users to talk to each other through their avatars. They were then able to intercept people’s messages and respond however they wanted. 

Generative AI could make this threat even worse because it allows anyone to instantaneously clone people’s voices and generate visual deepfakes, which malicious actors could then use to manipulate people in their VR interactions, says Zheng. 

Twisting reality

To test how easily people can be fooled by the inception attack, Zheng’s team recruited 27 volunteer VR experts. The participants were asked to explore applications such as a game called Beat Saber, where players control light sabers and try to slash beats of music that fly toward them. They were told the study aimed to investigate their experience with VR apps. Without their knowledge, the researchers launched the inception attack on the volunteers’ headsets. 

The vast majority of participants did not suspect anything. Out of 27 people, only 10 noticed a small “glitch” when the attack began, but most of them brushed it off as normal lag. Only one person flagged some kind of suspicious activity. 

There is no way to authenticate what you are seeing once you go into virtual reality, and the immersiveness of the technology makes people trust it more, says Zheng. This has the potential to make such attacks especially powerful, says Franzi Roesner, an associate professor of computer science at the University of Washington, who studies security and privacy but was not part of the study.

The best defense, the team found, is restoring the headset’s factory settings to remove the app. 

The inception attack gives hackers many different ways to get into the VR system and take advantage of people, says Ben Zhao, a professor of computer science at the University of Chicago, who was part of the team doing the research. But because VR adoption is still limited, there’s time to develop more robust defenses before these headsets become more widespread, he says. 

How Wi-Fi sensing became usable tech

Over a decade ago, Neal Patwari lay in a hospital bed, carefully timing his breathing. Around him, 20 wireless transceivers stood sentry. As Patwari’s chest rose and fell, their electromagnetic waves rippled around him. Patwari, now a professor at Washington University in St. Louis, had just demonstrated that those ripples could reveal his breathing patterns. 

A few years later, researchers from MIT were building a startup around the idea of using Wi-Fi signals to detect falls. They hoped to help seniors live more independently in their homes. In 2015, their prototype made it to the Oval Office: by way of demonstration, one of the researchers tripped and fell in front of President Obama. (Obama deemed the invention “pretty cool.”) 

It’s a tantalizing idea: that the same routers bringing you the internet could also detect your movements. “It’s like this North Star for everything ambient sensing,” says Sam Yang, who runs the health-sensor startup Xandar Kardian. For a while, he says, “investors just flocked in.”

Fast-forward nearly a decade: we have yet to see a commercially viable Wi-Fi device for tracking breathing or detecting falls. In 2022, the lighting company Sengled demonstrated a Wi-Fi lightbulb that could supposedly do both—but it still hasn’t been released. The startup that made its case to Obama now uses other radio waves. One breathing-monitor startup, called Asleep, set out to use Wi-Fi sensing technology but has pivoted to using microphones instead. Patwari also started his own company to make Wi-Fi breathing monitors. But, he says, “we got beaten out by Google.”

Wi-Fi sensing as a way to monitor individual health metrics has, for the most part, been eclipsed by other technologies, like ultra-wideband radar. But Wi-Fi sensing hasn’t gone away. Instead, it has quietly become available in millions of homes, supported by leading internet service providers, smart-home companies, and chip manufacturers. Wi-Fi’s ubiquity continues to make it an attractive platform to build upon, especially as networks continually become more robust. Soon, thanks to better algorithms and more standardized chip designs, it could be invisibly monitoring our day-to-day movements for all sorts of surprising—and sometimes alarming—purposes.

Yes, it could track your breathing. It could monitor for falls. It may make buildings smarter, and increase energy efficiency by tracking where people are. The flip side of this, however, is that it could also be used for any number of more nefarious purposes. Someone outside your home could potentially tell when it’s vacant, or see what you are doing inside. Consider all the reasons someone might want to secretly track someone else’s movements. Wi-Fi sensing has the potential to make many of those uses possible. What’s more, this technology interprets the physical properties of electromagnetic waves, not the encrypted data they carry. It represents a new kind of privacy risk, and one for which safeguards are still being developed.


Google’s Sleep Sensing feature is built into its Nest Hub; it tracks breathing, snoring, and coughing for whoever is sleeping closest to the device, using not Wi-Fi sensing but a radar chip. Otherwise, Google’s approach is basically the same as Patwari’s: first use electromagnetic waves to sense tiny movements, and then use AI to make those movements make sense. The main difference is the length of the waves. Shorter wavelengths offer more bandwidth and thus more accuracy; longer wavelengths allow sensing over greater distances. The waves that ripple out from most Wi-Fi-enabled devices are two or five inches long: they can cover a lot of ground. The waves from Google’s radar chip, in contrast, are just five millimeters long and can provide much more detail. To come even close, Wi-Fi sensing needs to look at how waves from multiple devices interact. But if it can do that, it will combine detail with range—without the need for special radar chips or dedicated devices like wearables. If Wi-Fi sensing becomes a default option in smart devices like lightbulbs—a push that is already beginning to take place—then those devices can start monitoring you. And as Wi-Fi sensing technology improves, these devices can start watching in more detail.
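(The wavelength figures above follow directly from λ = c/f. The sketch below assumes the radar chip operates in the 60 GHz band used by short-range radar such as Google’s Soli; the Wi-Fi frequencies are the standard 2.4 and 5 GHz bands.)

```python
# Wavelength arithmetic behind the comparison above: lambda = c / f.
C = 299_792_458  # speed of light, m/s

for name, freq_hz in [("2.4 GHz Wi-Fi", 2.4e9),
                      ("5 GHz Wi-Fi", 5.0e9),
                      ("60 GHz radar", 60.0e9)]:
    wavelength_m = C / freq_hz
    print(f"{name}: {wavelength_m * 100:.1f} cm ({wavelength_m * 39.37:.1f} in)")
```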

Initially, says Patwari, “[Wi-Fi] resolution was pretty poor.” Locations were accurate to only two meters, and two people chatting next to each other could look like one person. Over the past decade, researchers have been working to squeeze more information from the longer wavelengths used by commercial routers. More important, they are using AI to make sense of metadata that describes how waves scatter or fade, known as “channel state information.” That gives them much more information to work with. Sixteen years ago, “we would be able to know pretty reliably that a person had walked by,” Patwari says. “But now, people are getting gait information—what somebody’s walking pattern is like.” Still, while Wi-Fi sensing is getting more detailed, the reliability of those details remains iffy. “The signal is just not clean enough,” says Yang. 


Meanwhile, AI advances that are helping Wi-Fi sensing improve are also helping radar. Some of the uses that made Wi-Fi sensing exciting a decade ago are now commercially available with dedicated radar devices that use shorter wavelengths.

Inspiren, a radar company working in hospitals and long-term-care facilities, combines data from radars and cameras mounted above beds for fall detection. It both alerts staff to falls and flags the moments when frail patients are most at risk of falling, like when they get out of bed. Yang’s sensor company sells an FDA-cleared medical device that can monitor heart rates from above hospital beds or jail cells—no wearables required. Some of these devices are already in use in Kentucky jails, where the goal is to help prevent overdoses and other medical emergencies. 

Then there’s a much creepier use case: spying through walls. Patwari earned a “Wi-Fi Signals Can See Through Walls” headline back in 2009 when the technology detected motion in another room. In January 2023 new versions of that headline reappeared, this time for a story about Carnegie Mellon researchers who used an AI engine called DensePose to generate body shapes from Wi-Fi signals. (The accuracy was far from perfect.) Radar that senses people through walls has existed for years; it is used by SWAT teams, border patrol, search-and-rescue teams, and the military. 

Yet Daniel Kahn Gillmor, a staff technologist at the ACLU, has flagged Wi-Fi sensing by state actors as a potential privacy concern, particularly for activists. “We have lots of examples of law enforcement overreach,” he says. “If law enforcement gains access to this data and uses it to harass people, it’s another chunk of metadata that can be abused.” 

Wi-Fi sensing is already replacing other motion detection tools. It may also help make some current radar applications widely available—albeit with less reliability in many cases. In both contexts, Gillmor says, it could be used by corporations to monitor consumers, workers, and union organizers; by stalkers or domestic abusers to harass their victims; and by other nefarious actors to commit a variety of crimes. The fact that people cannot currently tell they are being monitored adds to the risk. “We need both legal and technical guardrails,” Gillmor says.

Wi-Fi sensing may also usher in new forms of monitoring. With its longer wavelengths, Wi-Fi could cover more ground than millimeter-wave radar. As the MIT team demonstrated back in 2015, it might eventually detect falls in private homes instead of hospitals. But there is a reason Google’s Nest tracks breathing from one nightstand instead of from every lightbulb in the house: in the real world, context is hard for algorithms to parse. In a hospital room, a fall is probably a fall. In a home, Grandma falling looks a lot like a child jumping off the couch. So researchers are working on ways to reidentify known users. In addition to flagging falls, a tool like that could spot a burglar while a family is on vacation; depending on the settings and the context, it could also spot a teenager coming home after curfew, activists holding a meeting, or—in countries that enforce sodomy laws—two people of the same sex sleeping in the same bed.

Of course, further along the electromagnetic spectrum, security cameras and nanny cams can already track individuals with ease. “You’ve got to remember: the context you get out of the camera is just crazy,” says Taj Manku, CEO of the Wi-Fi sensing company Cognitive Systems. “You see the person’s face. You see whether they’re doing jumping jacks or they’re doing exercise or they’re doing something bad.”

But the mere fact that existing tools may achieve overlapping results, Gillmor says, does not lower the risk: “That way lies privacy nihilism.” 

Whether or not Wi-Fi can beat other sensors at their own games, integrating Wi-Fi sensing with those tools could eventually enhance the strengths of each. For now, commercial providers are taking advantage of Wi-Fi’s range to focus on home security, along with one other area where they believe that Wi-Fi sensing is already the best solution. Spence Maid, CEO of the Wi-Fi sensing company Origin Wireless, puts it this way: “I hate to even say it, but—‘Is Mom alive?’”    


Until a year and a half ago, Emily Nikolich, 96, lived on her own in a condo in New Jersey. Each day, her grandchildren sent her new photos in an app on her tablet. Each day, Emily could spy on her five great-grandchildren, ages newborn to four. Meanwhile, her son Paul Nikolich, 68, was spying on her.

In 2021, Paul installed a Wi-Fi sensing tool from Origin Wireless called Hex Home. Five small, glowing disks plugged in around Emily’s home—with her permission—helped Paul to triangulate her position. He showed me the app. It didn’t track Emily per se; instead, it tracked movement near each disk. But since Emily lived alone, the effect was the same: Paul could easily watch her daily journeys from bed to brunch to bathroom and back.

It was “a relief,” says Paul, to know that his mom was okay, even when he was traveling and couldn’t call. So when Emily moved into an assisted living home last year, the monitors came with her. Hex has learned Emily’s routine; if something out of the ordinary happens—if, for example, she stays in bed all day—it can send Paul an alert. So far, he hasn’t had any. “Fortunately, she’s been doing really well,” he says. 

In practice, Wi-Fi sensing still has a hard time with details. But it is very good at noticing human presence, regardless of walls or furniture. That accuracy, according to Manku? “It’s 100%.” That makes Wi-Fi sensing great for energy management (lightbulb maker WiZ uses it to turn the lights off in empty rooms) and for cutting back on false alarms from home security systems. It can also be helpful in places with aging populations. In Japan, Maid says, “they’re having the mail delivery people knock on doors and make sure people are still alive.” An Okinawa-based company is developing a proof-of-life service using Origin’s technology.

Manku estimates that at least 30 million homes already have some kind of Wi-Fi sensing available. One of Verizon’s new Fios routers now ships with Origin Wireless’s “human presence detection” built in. Stationary smart things already on the network—like lightbulbs, smart plugs, speakers, or Google Nests—can instantly become sensors. Other internet service providers are creating similar offerings; Cognitive Systems partners with more than 160 ISPs. This January, Cognitive Systems announced that its technology will soon be available in many of the cheap smart plugs for sale on Amazon, allowing people to use Wi-Fi sensing through their existing Google, Apple, and Amazon Alexa smart-home apps.

Eventually, the Wi-Fi sensing companies I spoke with would like to go even bigger: serving not just homes and small businesses, but also larger office buildings or stores. Wi-Fi sensing, Manku says, could help firefighters locate people behind smoke too dense to see through; smart HVAC could leave the AC on for people working late. Occupancy data could help companies make post-pandemic downsizing decisions; foot-traffic data could inform in-store product placement. But to be useful in those complex scenarios, Wi-Fi would need to accurately count and locate lots of people. 

Jie Yang, a researcher at Florida State University, is thinking bigger and in a slightly different direction: he is counting and locating people—and then tracking them individually. “Five years ago, most of the work focused on a single person,” Yang says. “Right now, we are trying to target multiple persons, like a family.” Recent research has focused on reidentifying target individuals when multiple people are present, using walking patterns or breathing rate. In a 2023 paper, Yang showed that it was possible to reidentify people in new environments. But for that research to work in the real world, even for just a handful of family members or employees, researchers won’t just need better AI; they will also need better hardware. 

That’s where Emily’s son Paul comes in. 


For the past 22 years, the younger Nikolich has chaired an obscure but influential group within the Institute of Electrical and Electronics Engineers: the 802 LAN/MAN Standards Committee, which sets the technical standards for Wi-Fi and Ethernet compatibility. 

In 2019, Nikolich attended an IEEE dinner in Washington, DC. Ray Liu, Origin Wireless’s founder and a recent IEEE president, was sitting across the table from him, discussing Wi-Fi sensing with another attendee. Nikolich started listening in. He had been thinking about how to wire—and unwire—the internet since around the time URLs were invented. But here, suddenly, was something different. “I was very excited about it,” Nikolich says. 

Nikolich and Liu started talking, and Nikolich expressed his support for a subcommittee devoted to Wi-Fi sensing. Since 2020, the 802.11bf Task Group for WLAN Sensing, led by experts from companies like Huawei and Qualcomm, has been working on standards for chipmakers designed to make Wi-Fi sensing easier. Crucially, when the new standards go into effect, the channel state information that Wi-Fi sensing algorithms use will become more consistent. Right now, that information requires lots of qualifying and debugging. When the new standard comes out in 2025, it will allow “every Wi-Fi device to easily and reliably extract the signal measurements,” Yang says. That alone should help get more Wi-Fi sensing products on the market. “It will be explosive,” Liu believes.

The longer-term use cases imagined by the committee include counting and finding people in homes or in stores, detecting children left in the back seats of cars, and identifying gestures, along with long-standing goals like detecting falls, heart rates, and respiration.

Where such goals are concerned, three other IEEE subcommittees may also make a difference. The first is 802.11be, better known as Wi-Fi 7. Wi-Fi 7, which rolls out this year, will open up an extra band of radio frequencies for new Wi-Fi devices to use, which means more channel state information for algorithms to play with. It also adds support for more tiny antennas on each Wi-Fi device, which should help algorithms triangulate positions more accurately. With Wi-Fi 7, Yang says, “the sensing capability can improve by one order of magnitude.” 

The Wi-Fi 8 standard, expected in a few years, could lead to another leap in detail and accuracy. Combined with more advanced algorithms, Yang says, Wi-Fi 8 could allow sensors to track not just a few people per router, but 10 to 20. Then, sharing information between routers could make it possible to count and track individuals moving through crowded indoor spaces like airports. 

Finally, a less widely used standard known as WiGig already allows Wi-Fi devices to operate in the millimeter-wave space used by radar chips like the one in the Google Nest. If that standard ever takes off, it could allow other applications identified by the Wi-Fi sensing task group to become commercially viable. These include reidentifying known faces or bodies, identifying drowsy drivers, building 3D maps of objects in rooms, or sensing sneeze intensity (the task group, after all, convened in 2020).

There is one area that the IEEE is not working on, at least not directly: privacy and security. For now, says Oscar Au, an IEEE fellow and member of the Wi-Fi sensing task group who is a vice president at Origin Wireless, the goal is to focus on “at least get the sensing measurements done.” He says that the committee did discuss privacy and security: “Some individuals have raised concerns, including myself.” But they decided that while those concerns do need to be addressed, they are not within the committee’s mandate.


When Wi-Fi signals are used to send data, the information being sent back and forth over the electromagnetic waves can be encrypted so that it can’t be intercepted by hackers. But the waves themselves just exist; they can’t be encrypted in quite the same way.

“Even if your data is encrypted,” says Patwari, “somebody sitting outside of your house could get information about where people are walking inside of the house—maybe even who is doing the walking.” With time, skill, and the right equipment, they could potentially watch your keystrokes, read your lips, or listen to sound waves; with good enough AI, they might be able to interpret them. “I mean,” Patwari clarifies, “the current technology I think would work best is looking inside the window, right?” 

Wherever there is Wi-Fi, walls are now more porous. But right now, the only people who can do this kind of spying are researchers—and people who can replicate their results. That latter group includes state governments, Jie Yang confirms. “It’s likely that this is already happening,” Yang says. “That is: I don’t know that people are actually doing that. But I’m sure that we are capable of doing that.” 

So more than a decade after he first started trying to use Wi-Fi signals to reveal location information, Patwari is now trying to do the opposite. Recently, he completed a project sponsored by the US Army Research Office, designing strategies to introduce noise and false positives into channel state information to make it harder for unauthorized devices to spy. The EU recently sponsored a project called CSI-MURDER (so called because it obfuscates, or kills, the channel state information). There are plenty of reasons to prevent eavesdropping; for one, Patwari says, the US Army might want “to make sure that they can provide Wi-Fi on a base or whatever and not have audio of what’s going on inside the base eavesdropped outside.” 

Plenty of governments already spy on their own citizens, including the US and China—both hubs of Wi-Fi sensing research. That is a risk here too. Even though the most sensitive Wi-Fi sensing data is often stored locally, intelligence agencies could easily monitor that data in person—with or without a warrant or subpoena, depending on the circumstances. They could also access any reports sent to the cloud. For many Americans, though, the bigger privacy risk may come from ordinary users, not from government eavesdroppers. Gillmor notes that the tools already on the market for detecting human presence could create an extra hurdle for people experiencing domestic abuse. “I’m really glad to hear that a stalker would follow the Verizon terms of service, but color me a little bit skeptical,” he adds.

Palak Shah, who leads the social innovation lab at the National Domestic Workers Alliance, says she could imagine upsides for Wi-Fi sensing. “Wage theft is a very common problem in our industry,” she says. A tool that helps nannies, housekeepers, or care workers prove they were in the home could help ensure proper payment. But, she says, “it’s usually the case that things end up being used against the worker even if there’s a potential for it to be used for them,” and “that inherent power dynamic is really hard to disrupt.”

The National Domestic Workers Alliance has helped pass bills in several states to make it illegal to “monitor or record” in bathrooms. In comparison, Wi-Fi sensing is often touted as “privacy protecting” because it does not show naked bodies. But, Gillmor says, “just because it is a sensing mode that humans do not natively have does not mean that it can’t be invasive.”

In another sense, Wi-Fi sensing is more concerning than cameras, because it can be completely invisible. You can spot a nanny cam if you know what to look for. But if you are not the person in charge of the router, there is no way to know if someone’s smart lightbulbs are monitoring you—unless the owner chooses to tell you. This is a problem that could be addressed to some extent with labeling and disclosure requirements, or with more technical solutions, but none currently exist. 

I asked Liu what advice he would give to lawmakers wrestling with these new concerns. He told me one senator has already asked. “This is a technology that can help change the world and make lives better. Elder care, security, energy management—everything,” he says. “Nevertheless, we as a society need to draw a red line. Whatever the red line is—it’s not my job to decide—here is the red line we do not cross.”

Meg Duff is a reporter and audio producer based in Brooklyn. She covers science, technology, and climate change.

Algorithms are everywhere

Like a lot of Netflix subscribers, I find that my personal feed tends to be hit or miss. Usually more miss. The movies and shows the algorithms recommend often seem less predicated on my viewing history and ratings, and more geared toward promoting whatever’s newly available. Still, when a superhero movie starring one of the world’s most famous actresses appeared in my “Top Picks” list, I dutifully did what 78 million other households did and clicked.

As I watched the movie, something dawned on me: recommendation algorithms like the ones Netflix pioneered weren’t just serving me what they thought I’d like—they were also shaping what gets made. And not in a good way. 

cover of Filterworld: How Algorithms Flattened Culture by Kyle Chayka

DOUBLEDAY

The movie in question wasn’t bad, necessarily. The acting was serviceable, and it had high production values and a discernible plot (at least for a superhero movie). What struck me, though, was a vague sense of déjà vu—as if I’d watched this movie before, even though I hadn’t. When it ended, I promptly forgot all about it. 

That is, until I started reading Kyle Chayka’s recent book, Filterworld: How Algorithms Flattened Culture. A staff writer for the New Yorker, Chayka is an astute observer of the ways the internet and social media affect culture. “Filterworld” is his coinage for “the vast, interlocking … network of algorithms” that influence both our daily lives and the “way culture is distributed and consumed.” 

Music, film, the visual arts, literature, fashion, journalism, food—Chayka argues that algorithmic recommendations have fundamentally altered all these cultural products, not just influencing what gets seen or ignored but creating a kind of self-reinforcing blandness we are all contending with now.

That superhero movie I watched is a prime example. Despite my general ambivalence toward the genre, Netflix’s algorithm placed the film at the very top of my feed, where I was far more likely to click on it. And click I did. That “choice” was then recorded by the algorithms, which probably surmised that I liked the movie and then recommended it to even more viewers. Watch, wince, repeat.  

“Filterworld culture is ultimately homogenous,” writes Chayka, “marked by a pervasive sense of sameness even when its artifacts aren’t literally the same.” We may all see different things in our feeds, he says, but they are increasingly the same kind of different. Through these milquetoast feedback loops, what’s popular becomes more popular, what’s obscure quickly disappears, and the lowest-common-denominator forms of entertainment inevitably rise to the top again and again.

This is actually the opposite of the personalization Netflix promises, Chayka notes. Algorithmic recommendations reduce taste—traditionally, a nuanced and evolving opinion we form about aesthetic and artistic matters—into a few easily quantifiable data points. That oversimplification subsequently forces the creators of movies, books, and music to adapt to the logic and pressures of the algorithmic system. Go viral or die. Engage. Appeal to as many people as possible. Be popular.  

A joke posted on X by a Google engineer sums up the problem: “A machine learning algorithm walks into a bar. The bartender asks, ‘What’ll you have?’ The algorithm says, ‘What’s everyone else having?’” “In algorithmic culture, the right choice is always what the majority of other people have already chosen,” writes Chayka. 

One challenge for someone writing a book like Filterworld—or really any book dealing with matters of cultural import—is the danger of (intentionally or not) coming across as a would-be arbiter of taste or, worse, an outright snob. One might ask: What’s wrong with a little mindless entertainment? (Many asked just that in response to Martin Scorsese’s controversial Harper’s essay in 2021, which decried Marvel movies and the current state of cinema.)

Chayka addresses these questions head on. He argues that we’ve really only traded one set of gatekeepers (magazine editors, radio DJs, museum curators) for another (Google, Facebook, TikTok, Spotify). Created and controlled by a handful of unfathomably rich and powerful companies (which are usually led by a rich and powerful white man), today’s algorithms don’t even attempt to reward or amplify quality, which of course is subjective and hard to quantify. Instead, they focus on the one metric that has come to dominate all things on the internet: engagement.

There may be nothing inherently wrong (or new) about paint-by-numbers entertainment designed for mass appeal. But what algorithmic recommendations do is supercharge the incentives for creating only that kind of content, to the point that we risk not being exposed to anything else.

“Culture isn’t a toaster that you can rate out of five stars,” writes Chayka, “though the website Goodreads, now owned by Amazon, tries to apply those ratings to books. There are plenty of experiences I like—a plotless novel like Rachel Cusk’s Outline, for example—that others would doubtless give a bad grade. But those are the rules that Filterworld now enforces for everything.”

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are. 

Algorithmic omnipresence

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

cover of How Data Happened

W.W. NORTON

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

They begin their historical investigation at a moment they argue is crucial to understanding our current predicament: the birth of statistics in the late 18th and early 19th century. It was a period of conflict and political upheaval in Europe. It was also a time when nations were beginning to acquire both the means and the motivation to track and measure their populations at an unprecedented scale.

“War required money; money required taxes; taxes required growing bureaucracies; and these bureaucracies needed data,” they write. “Statistics” may have originally described “knowledge of the state and its resources, without any particularly quantitative bent or aspirations at insights,” but that quickly began to change as new mathematical tools for examining and manipulating data emerged.

One of the people wielding these tools was the 19th-century Belgian astronomer Adolphe Quetelet. Famous for, among other things, developing the highly problematic body mass index (BMI), Quetelet had the audacious idea of taking the statistical techniques his fellow astronomers had developed to study the position of stars and using them to better understand society and its people. This new “social physics,” based on data about phenomena like crime and human physical characteristics, could in turn reveal hidden truths about humanity, he argued.

“Quetelet’s flash of genius—whatever its lack of rigor—was to treat averages about human beings as if they were real quantities out there that we were discovering,” write Wiggins and Jones. “He acted as if the average height of a population was a real thing, just like the position of a star.”

From Quetelet and his “average man” to Francis Galton’s eugenics to Karl Pearson and Charles Spearman’s “general intelligence,” Wiggins and Jones chart a depressing progression of attempts—many of them successful—to use data as a scientific basis for racial and social hierarchies. Data added “a scientific veneer to the creation of an entire apparatus of discrimination and disenfranchisement,” they write. It’s a legacy we’re still contending with today. 

Another misconception that persists? The notion that data about people are somehow objective measures of truth. “Raw data is an oxymoron,” observed the media historian Lisa Gitelman a number of years ago. Indeed, all data collection is the result of human choice, from what to collect to how to classify it to who’s included and excluded. 

Whether it’s poverty, prosperity, intelligence, or creditworthiness, these aren’t real things that can be measured directly, note Wiggins and Jones. To quantify them, you need to choose an easily measured proxy. This “reification” (“literally, making a thing out of an abstraction about real things”) may be necessary in many cases, but such choices are never neutral or unproblematic. “Data is made, not found,” they write, “whether in 1600 or 1780 or 2022.”

“We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future.”

Perhaps the most impressive feat Wiggins and Jones pull off in the book as they continue to chart data’s evolution throughout the 20th century and the present day is dismantling the idea that there is something inevitable about the way technology progresses. 

For Quetelet and his ilk, turning to numbers to better understand humans and society was not an obvious choice. Indeed, from the beginning, everyone from artists to anthropologists understood the inherent limitations of data and quantification, making some of the same critiques of statisticians that Chayka makes of today’s algorithmic systems (“Such statisticians ‘see quality not at all, but only quantity’”).

Whether they’re talking about the machine-learning techniques that underpin today’s AI efforts or an internet built to harvest our personal data and sell us stuff, Wiggins and Jones recount many moments in history when things could just as easily have gone a different way.

“The present is not a prison sentence, but merely our current snapshot,” they write. “We don’t have to use unethical or opaque algorithmic decision systems, even in contexts where their use may be technically feasible. Ads based on mass surveillance are not necessary elements of our society. We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future. Privacy is not dead because of technology; it’s not true that the only way to support journalism or book writing or any craft that matters to you is spying on you to service ads. There are alternatives.” 

A pressing need for regulation

If Wiggins and Jones’s goal was to reveal the intellectual tradition that underlies today’s algorithmic systems, including “the persistent role of data in rearranging power,” Josh Simons is more interested in how algorithmic power is exercised in a democracy and, more specifically, how we might go about regulating the corporations and institutions that wield it.

cover of Algorithms for the People

PRINCETON UNIVERSITY PRESS

Currently a research fellow in political theory at Harvard, Simons has a unique background. Not only did he work for four years at Facebook, where he was a founding member of what became the Responsible AI team, but he previously served as a policy advisor for the Labour Party in the UK Parliament. 

In Algorithms for the People: Democracy in the Age of AI, Simons builds on the seminal work of authors like Cathy O’Neil, Safiya Noble, and Shoshana Zuboff to argue that algorithmic prediction is inherently political. “My aim is to explore how to make democracy work in the coming age of machine learning,” he writes. “Our future will be determined not by the nature of machine learning itself—machine learning models simply do what we tell them to do—but by our commitment to regulation that ensures that machine learning strengthens the foundations of democracy.”

Much of the first half of the book is dedicated to revealing all the ways we continue to misunderstand the nature of machine learning, and how its use can profoundly undermine democracy. But the book is quieter on a harder question: What if a “thriving democracy”—a term Simons uses throughout the book but never defines—isn’t always compatible with algorithmic governance? It’s a question he never really addresses.

Whether these are blind spots or Simons simply believes that algorithmic prediction is, and will remain, an inevitable part of our lives, the lack of clarity doesn’t do the book any favors. While he’s on much firmer ground when explaining how machine learning works and deconstructing the systems behind Google’s PageRank and Facebook’s Feed, there remain omissions that don’t inspire confidence. For instance, it takes an uncomfortably long time for Simons to even acknowledge one of the key motivations behind the design of the PageRank and Feed algorithms: profit. Not something to overlook if you want to develop an effective regulatory framework. 

“The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.”

Much of what’s discussed in the latter half of the book will be familiar to anyone following the news around platform and internet regulation (hint: that we should be treating providers more like public utilities). And while Simons has some creative and intelligent ideas, I suspect even the most ardent policy wonks will come away feeling a bit demoralized given the current state of politics in the United States. 

In the end, the most hopeful message these books offer is embedded in the nature of algorithms themselves. In Filterworld, Chayka includes a quote from the late, great anthropologist David Graeber: “The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” It’s a sentiment echoed in all three books—maybe minus the “easily” bit. 

Algorithms may entrench our biases, homogenize and flatten culture, and exploit and suppress the vulnerable and marginalized. But these aren’t completely inscrutable systems or inevitable outcomes. They can do the opposite, too. Look closely at any machine-learning algorithm and you’ll inevitably find people—people making choices about which data to gather and how to weigh it, choices about design and target variables. And, yes, even choices about whether to use them at all. As long as algorithms are something humans make, we can also choose to make them differently. 

Bryan Gardiner is a writer based in Oakland, California.

How Antarctica’s history of isolation is ending—thanks to Starlink

“This is one of the least visited places on planet Earth and I got to open the door,” Matty Jordan, a construction specialist at New Zealand’s Scott Base in Antarctica, wrote in the caption to the video he posted to Instagram and TikTok in October 2023. 

In the video, he guides viewers through an empty, echoing hut, pointing out where the men of Ernest Shackleton’s 1907 expedition lived and worked—the socks still hung up to dry and the provisions still stacked neatly in place, preserved by the cold. 

Jordan, who started making TikToks to keep family and friends up to date with his life in Antarctica, has now found himself at the center of a phenomenon. His channels have over a million followers. The video of Shackleton’s hut alone has racked up millions of views from all over the world. It’s also kind of a miracle: until very recently, those who lived and worked on Antarctic bases had no hope of communicating so readily with the outside world. 

Antarctica has long been a world apart. In the 19th and early 20th centuries, when dedicated expeditions began, explorers were cut off from home for years at a time, reliant on ships sailing back and forth from civilization to carry physical mail. They were utterly alone, the only humans for thousands of miles.

This made things difficult, emotionally and physically. With only the supplies they had on hand, explorers were limited in the scientific experiments they could conduct. They couldn’t send an SOS if they needed help (which was fairly often). And also—importantly, because many relied on publicity for funding—they couldn’t let the world know what was going on. 

In 1911, an Australian expedition led by Douglas Mawson was the first to take an antenna to the continent and attempt to transmit and receive wireless signals. But while Mawson was able to send a few messages during the team’s first season, he never received any back, so he had no way of knowing whether his transmissions had gotten through.

The winds at their base at Cape Denison, on the Antarctic coast directly south of Australia, raged at 70 kilometers an hour—every day, every night, for months on end. They finally succeeded in raising the mast during their second winter, only to be faced with a different problem: their radio operator was unable to work, having suffered psychosis during the six months of darkness. So the expedition was left isolated again. 

While Antarctic telecommunications have been steadily improving ever since the first permanent bases were established, many decades after Mawson’s ill-fated trip, life on the ice has always been characterized by some level of disconnection. And as life at home has become ever more dependent on constant connection, instant updates, streaming, and algorithms, living in Antarctica has been seen as a break—for better and for worse—from all the digital hustle-bustle. 

But the end of that long-standing disparity is now in sight. Starlink, the satellite constellation developed by Elon Musk’s company SpaceX to service the world with high-speed broadband internet, has come to Antarctica, finally bringing with it the sort of connectivity enjoyed by the world beyond the ice. 

Mawson and sledge Adelie
Douglas Mawson and his team had difficulty raising a radio antenna during the expedition they embarked on in 1911.
A ticker tape parade for Admiral Byrd returning from Antarctica in New York City
In the late 1920s, radio communication fed the appetite of a public eager for news of the suave Richard E. Byrd’s adventures, and he quickly became a celebrity.

Stories from the first Antarctic expeditions were so scarce they were a hot commodity: newspapers would pay top dollar to be told the news the minute explorers like Mawson and Shackleton arrived back in port. Now videos, posts, and FaceTime calls are common from people stationed at Antarctic bases and field camps, and from the surging numbers of tourists on ships. 

Suddenly, after more than a century as one of the least connected parts of the world, the seventh continent feels a lot closer to the others. For those whose lives and livelihoods take them there regularly, it’s been a long time coming. 

Taking the public with you

People have always been hungry for news of life in Antarctica. In the early days, regular updates about derring-do in the polar winds were the perfect way to capture the attention of the press—a key to securing the funding necessary for the outsize private expeditions of the early 20th century.

No one exemplified the close relationship between exploration and attention better than Admiral Richard E. Byrd, a charismatic self-publicist who named his succession of bases on the Ross Ice Shelf “Little America” and brought along a Boy Scout to represent America’s youth. Byrd was the quintessential explorer-celebrity, constantly making headlines with his daring feats.

His first privately funded expedition, in 1929, aimed to reach the South Pole by plane. Byrd gave frequent updates on his progress to the press via radiotelegraphy, using wireless signals to send messages in Morse code directly from Little America to coastal stations in San Francisco and Long Island. A New York Times reporter embedded with the expedition filed stories almost daily over the radiotelegraph, and the public followed Byrd’s every move, culminating in the historic flight over the South Pole on November 29, 1929.

By the time of Byrd’s next expedition, in 1933, technology had progressed enough to allow the first audio broadcasting station in Antarctica. The station made use of shortwave radio’s long-distance capabilities to transmit official mission reports, and it was also able to receive messages for the expedition members. A weekly variety program organized by the expedition’s journalist, Charles Murphy, was broadcast live to the public on AM stations. 

This innovative program allowed audiences at home to feel they were taking part in the expedition themselves. Like popular radio shows of the day such as The Shadow and The Lone Ranger, Adventures with Admiral Byrd was an action-packed serial, featuring updates on the progress of the expedition directly from the hardy explorers themselves. The scientists also gave weather reports and talks, and they performed songs and skits. 

men working to assemble McMurdo Station
In 1957, the Navy’s Construction Battalions were deployed to build McMurdo Station on Ross Island, near the site of Captain Robert Scott’s first hut.
US NAVY/UNITED STATES ANTARCTIC PROGRAM

In the program’s most popular segment, Americans were able to chat with the men in Little America live on air. Little America’s postmaster spoke to his wife on their 21st anniversary; Al Carbone, the eccentric cook for the expedition, spoke to the chef at New York’s Waldorf-Astoria hotel. 

“The spirited narratives of real-life adventure are making interesting program fare for the world’s radio listeners who have been accustomed to the make-believe studio dramatizations usually available on the broadcast channels,” read a 1934 review in Radio News. The program indelibly associated its main sponsor, General Foods’ Grape-Nuts cereal, with the bravery of Admiral Byrd and his companions, and brought Antarctica vividly into the lives of millions of listeners on a wider scale than ever before. 

Helpful hams and secret codes

By 1957, Admiral Byrd was recognized as the world’s foremost expert in Antarctic exploration and was leading America’s Operation Deep Freeze, a mission to build a permanent American presence on the continent. The US Naval Construction Battalions, known as the Seabees, were deployed to build McMurdo Station on the solid ground of Ross Island, close to the first hut built by Captain Robert Scott in 1901.

Deep Freeze brought a massive military presence to Antarctica, including the most complex and advanced communications array the Navy could muster. Still, men who wanted to speak to loved ones at home had limited options. Physical mail could come and go on ships a few times a year, or they could send expensive telegrams over wireless—limited to 100 or 200 words per month each way. At least these methods were private, unlike the personal communications over radio on Byrd’s expedition, which everyone else could listen in to by default.

In the face of these limitations, another option soon became popular among the Navy men. The licensed operators of McMurdo’s amateur (ham) station were assisted by hams back at home. Seabees would call from McMurdo to a ham in America, who would patch them straight through to their destination through the US phone system, free of charge. 

Some of these helpful hams became legendary. Jules Madey and his brother John, two New Jersey teenagers with the call sign K2KGJ, had built a 110-foot-tall radio tower in their backyard, with a transmitter that was more than capable of communicating to and from McMurdo Sound. 

To save money, a code known as “WYSSA” offered a broad variety of set phrases for common topics. WYSSA itself stood for “All my love, darling.”

From McMurdo, the South Pole, and the fifth Little America base on the Ross Ice Shelf, ham operators could ring Jules at nearly any time of day or night, and he’d connect them to home. Jules became an Antarctic celebrity and icon. A few of the engaged couples he helped to link up even invited him and his brother to their weddings, after the men returned from their tours of duty in Antarctica. Many Deep Freeze men still remembered the Madey brothers decades later. 

In the early 1960s, continued Deep Freeze operations, including support ships, were improving communication across American outposts in Antarctica. Bigger antennas, more powerful receivers and transmitters, and improvements to ground-to-air communication systems were installed, shoring up the capacity for scientific activity, transport, and construction.  

Around this time, the Australian National Antarctic Research Expeditions were improving their communications capacity as well. Like other Antarctic programs, they used telex machines, sending text out over radio waves to link up with a phone-line-based system on land. Telex, a predecessor to fax technology, text messaging, and email, was in use from the 1960s onwards as an alternative to Morse code and voice over HF and VHF radio. On the other side of the line, a terminal would receive the text and print it out.

a smiling person in a t-shirt types at a telex
The Australian National Antarctic Research Expeditions sent text over radio waves and developed a special code known as “WYSSA” to save money on the expensive telex rates.
MALCOLM MACFARLANE ©ANTARCTICA NEW ZEALAND PICTORIAL COLLECTION

In order to save money on the expensive per-word rates, a special code known as “WYSSA” (pronounced, in an Australian accent, “whizzer”) was constructed. This creative solution became legendary in Antarctic history. WYSSA itself stood for “All my love, darling,” and the code offered a broad variety of predetermined phrases for common topics, from the inconveniences of Antarctic life (YAYIR—“Fine snow has penetrated through small crevices in the huts”) to affectionate sentiments (YAAHY—“Longing to hear from you again, darling”) and personal updates (YIGUM—“I have grown a beard which is awful”). 
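In modern terms, the codebook worked like a simple lookup table. The sketch below uses the phrases quoted above; the encode/decode helper is my own illustration, not the historical codebook, which ran to many more entries.

```python
# A toy version of the WYSSA code: each five-letter group expands to a
# stock phrase, so senders paid per-word telex rates for one word
# instead of a whole sentence.
WYSSA_CODE = {
    "WYSSA": "All my love, darling",
    "YAYIR": "Fine snow has penetrated through small crevices in the huts",
    "YAAHY": "Longing to hear from you again, darling",
    "YIGUM": "I have grown a beard which is awful",
}

def decode(message: str) -> str:
    """Expand each code word; pass unknown words through unchanged."""
    return ". ".join(WYSSA_CODE.get(word, word) for word in message.split())

print(decode("YIGUM WYSSA"))
# I have grown a beard which is awful. All my love, darling
```

Two billable words, two full sentences of news from the ice.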

Changing times

Visit stations in Antarctica today, and you will see massive radio domes that now dot the landscape. Inside are the dishes that track the satellites the stations rely on. Satellite schedules are published weekly on the US Antarctic Program’s website, showing the windows when connectivity is available. 

The first satellites, run by Inmarsat, came online in the early 1980s and were a huge improvement on radio transmission. Inmarsat’s network provided coverage up to 75° latitude, about 9° south of the Antarctic Circle—which meant it now covered some, although not all, Antarctic bases. (McMurdo Station and Scott Base, down at 77° S, missed out.) It also allowed for high-quality service at any time of day or night, unaffected by atmospheric disturbances.

In the ’90s, the Iridium constellation of low-orbit satellites became operational. These satellites were launched into polar orbits and provided continuous service to all parts of Antarctica. Widespread satellite phone and email access quickly replaced radio as the best way to speak to those back home. But a thousand seasonal workers at McMurdo still had to share an internet link with a paltry capacity of 17 megabits per second, accessible only by a few in-demand Ethernet cables. Phone calls, while possible, were still inconveniently expensive, and video calls thanklessly difficult.

Now the satellite revolution has taken its next step. The 2022–’23 season in Antarctica brought an exciting development for those on the ice: the first trial of SpaceX’s Starlink satellite connection.

Its introduction means even the most remote parts of the region—where a great deal of important scientific work takes place—are becoming more connected. When Peter Neff, a glaciologist at the University of Minnesota, first went out in the field in the Antarctic summer season of 2009–’10, he had to send a physical thumb drive back to McMurdo to share photos of his field camps.

Now Neff is director of field research and data at the Center for Oldest Ice Exploration, or COLDEX, a National Science Foundation–funded project aiming to seek out the oldest possible ice-core records in Antarctica to solve mysteries about Earth’s past climate. He spearheaded the installation of a Starlink connection at COLDEX’s field site in the 2022–’23 summer season, making it among the first field camps to have access to high-speed internet connectivity. The test was free thanks to Neff’s connections at the NSF, but for the 2023–’24 summer season, his camp is paying just $250 a month for 50 gigabytes or $1,000 for a terabyte, along with a $2,500 flat fee for the terminal. 

The team now has an easy link with the world, allowing participants to, among other things, text photos of the weather to transport pilots and trade spreadsheets with remote logistics managers. Starlink “helps with just complete ease of communications, in the form that we’re all used to,” Neff says. Matty Jordan, the TikTok creator at Scott Base, agrees that Starlink has made doing science there easier. “Scientists are able to transmit larger amounts of data back to New Zealand, which makes their work much faster and more efficient,” he says.

And crucially, it has helped scientists at the bases communicate their work to the public.

“Social media is an easy way for people to see what happens on a research station and helps people engage with the work that happens there,” Jordan says, emphasizing the opportunity for people at home to learn more about the importance of climate research.

“The first thing that people took advantage of it for was outreach,” agrees Ed Brook, COLDEX’s director, “and the possibility of talking to television or radio or print journalists in the field.” Just as with Byrd and his radio show, the first use for communication upgrades is often to satisfy the worldwide appetite for stories straight from the ice. 

High bandwidth in the ice

The next step to open up Antarctic communications could be a proposed fiber-optic undersea cable linking New Zealand and McMurdo Station. It would cost upwards of $200 million, but as an NSF workshop determined, it would be a huge boon not just to the US and New Zealand but to all countries with Antarctic programs, and the science they conduct.

According to the report released by the workshop in 2021 (before the Starlink deployment), the entire US Antarctic scientific community shared the same connective capacity available to many individual American households or small businesses—well under 100 Mbps per individual end user. A fiber link capable of anywhere from 100 gigabits per second to 300 terabits per second would bring Antarctic research up to the level of connectivity enjoyed by scientists in many other places.

No telecommunications upgrades can change the nature of Antarctica, or the emotions it can stir up.

It would also make Antarctica an even more popular place for tourists. In 2023, the number of visitors was 40% over pre-pandemic levels. More than 100,000 people were expected to visit in the 2023–’24 season, mainly on expedition cruise ships to the Antarctic Peninsula.

Lizzie Williams, a product manager at the travel agency Swoop Antarctica, has seen firsthand the changes Starlink and improved internet connectivity have brought to tourist excursions. “It’s now possible to send through compressed photos and videos,” she told me over email. “We have even seen some people FaceTiming their families, though it can be a bit glitchy.”

According to Williams, daily inquiries about accessing the internet on board have increased. Growing numbers of people even try to work remotely from Antarctic cruises. But she warns that connection on most ships is far too expensive and unreliable for Zoom calls, and that Swoop’s cruise staff urges a low-technology environment for guests. “We encourage them to be out on deck enjoying the icebergs and wildlife, really making the most of their precious time in Antarctica,” she says. 

Some who live on the ice also favor keeping the experience as low-tech as possible. While social media may have its tentacles wrapped around the rest of the world, Antarctica has, up to now, remained largely beyond its reach.

a person walking up to a radio dome
Today, massive radio domes dot the landscape. Hiding inside are dishes that can track the satellites, in various states of orbital decay, that the stations rely on.
Inmarsat satellites
Satellite connectivity became available to Antarctic stations close to the coast in 1982, via the dedicated satellites of Inmarsat.

Luckily, so far, it seems as if Starlink hasn’t upset the special, close-knit atmosphere at Antarctica’s outposts. Demie Huffman, a PhD student in land and atmospheric science at the University of Minnesota, surveyed participants in COLDEX’s 2022–’23 field year about their experiences with the service. “People generally ended up being pleasantly surprised with the minimal impact that it had on group cohesion,” she says. “They would have a movie night together, instead of just having to read a book or rely on the things that a couple of people had thought to bring.” 

Still, Antarctica’s regular residents want to make sure the arrival of high-speed internet doesn’t change things too much as time goes on. “Over the winter, we made a rule that no phones were allowed at the dinner table to ensure that people built personal relationships with others on base,” says Jordan.

In any case, Antarctica will always be a magical place, even if it is no longer isolated from communication with the rest of the world. Ever since Scott and Shackleton published their best-selling books, one of its greatest natural resources has been its stories. People just can’t get enough of penguins, crevasses, tales of adventure, and natural spectacle. There’s a reason reporters swarmed returning explorers at train stations—the same reason Jordan’s TikToks rack up views by the million.

And while going there no longer means stepping out of time and into another world entirely, no telecommunications upgrades can change the nature of the place, or the emotions it can stir up. 

The same intense winds still blow where Shackleton’s men once lived, the same sun still hangs in the sky for six months of endless daylight, and the icy landscapes still exert an inexplicable pull on human hearts and minds. 

Only now, you can share the magic freely with everyone at home, without delay. 

Allegra Rosenberg covers media, the environment, and technology. Her book Fandom Forever (and Ever) is forthcoming from W.W. Norton.

Wikimedia’s CTO: In the age of AI, human contributors still matter

Selena Deckelmann has never been afraid of people on the internet. With a TV repairman and CB radio enthusiast for a grandfather and a pipe fitter for a stepdad, Deckelmann grew up solving problems by talking and tinkering. So when she found her way to Linux, one of the earliest open-source operating systems, as a college student in the 1990s, the online community felt comfortingly familiar. And the thrilling new technology inspired Deckelmann to change her major from chemistry to computer science. 

Now almost three decades into a career in open-source technology, Deckelmann is the chief product and technology officer (CPTO) at the Wikimedia Foundation, the nonprofit that hosts and manages Wikipedia. There she not only guides one of the most turned-to sources of information in the world but serves a vast community of “Wikipedians,” the hundreds of thousands of real-life individuals who spend their free time writing, editing, and discussing entries—in more than 300 languages—to make Wikipedia what it is today. 

It is undeniable that technological advances and cultural shifts have transformed our online universe over the years—especially with the recent surge in AI-generated content—but Deckelmann still isn’t afraid of people on the internet. She believes they are its future.  

In the summer of 2022, when she stepped into the newly created role of CPTO, Deckelmann didn’t know that a few months later, the race to build generative AI would accelerate to a breakneck pace. With the release of OpenAI’s ChatGPT and other large language models, and the multibillion-dollar funding cycle that followed, 2023 became the year of the chatbot. And because these models require heaps of cheap (or, preferably, even free) content to function, Wikipedia’s tens of millions of articles have become a rich source of fuel. 

To anyone who’s spent time on the internet, it makes sense that bots and bot builders would look to Wikipedia to strengthen their own knowledge collections. Over its 23 years, Wikipedia has become one of the most trusted sources for information—and a totally free one, thanks to the site’s open-source mission and foundation support. But with the proliferation of AI-generated text and images contributing to a growing misinformation and disinformation problem, Deckelmann must tackle an existential question for Wikipedia’s product and community: How can the site’s open-source ethos survive the coming content flood? 

Deckelmann argues that Wikipedia will become an even more valuable resource as nuanced, human perspectives become harder to find online. But fulfilling that promise requires continued focus on preserving and protecting Wikipedia’s beating heart: the Wikipedians who volunteer their time and care to keep the information up to date through old-fashioned talking and tinkering. Deckelmann and her team are dedicated to an AI strategy that prioritizes building tools for contributors, editors, and moderators to make their work faster and easier, while running off-platform AI experiments with ongoing feedback from the community. “My role is to focus attention on sustainability and people,” says Deckelmann. “How are we really making life better for them as we’re playing around with some cool technology?”

What Deckelmann means by “sustainability” is a pressing concern in the open-source space more broadly. When complex services or entire platforms like Wikipedia depend on the time and labor of volunteers, contributors may not get the support they need to keep going—and keep those projects afloat. Building sustainable pathways for the people who make the internet has been Deckelmann’s personal passion for years. In addition to working as an engineering and product leader at places like Intel and Mozilla and contributing to open-source projects herself, she has founded, run, and advised multiple organizations and conferences that support open-source communities and open doors for contributors from underrepresented groups. “She has always put the community first, even when the community is full of jerks making life unnecessarily hard,” says Valerie Aurora, who cofounded the Ada Initiative—a former nonprofit supporting women in open-source technology that had brought Deckelmann onto its board of directors and advisory board.

Addressing both a community’s needs and an organization’s priorities can be a challenging balancing act—one that is at the core of open-source philosophy. At the Wikimedia Foundation, everything from the product’s long-term direction to details on its very first redesign in decades is open for public feedback from Wikipedia’s enormous and vocal community. 

Today Deckelmann sees a newer sustainability problem in AI development: the predominant method for training models is to pull content from sites like Wikipedia, often generated by open-source creators without compensation or even, sometimes, awareness of how their work will be used. “If people stop being motivated to [contribute content online],” she warns, “either because they think that these models are not giving anything back or because they’re creating a lot of value for a very small number of people—then that’s not sustainable.” At Wikipedia, Deckelmann’s internal AI strategy revolves around supporting contributors with the technology rather than short-circuiting them. The machine-learning and product teams are working on launching new features that, for example, automate summaries of verbose debates on a wiki’s “Talk” pages (where back-and-forth discussions can go back as far as 20 years) or suggest related links when editors are updating pages. “We’re looking at new ways that we can save volunteers lots of time by summarizing text, detecting vandalism, or responding to different kinds of threats,” she says.

But the product and engineering teams are also preparing for a potential future where Wikipedia may need to meet its readers elsewhere online, given current trends. While Wikipedia’s traffic didn’t shift significantly during ChatGPT’s meteoric rise, the site has seen a general decline in visitors over the last decade as a result of Google’s ongoing search updates and generational changes in online behavior. In July 2023, as part of a project to explore how the Wikimedia Foundation could offer its knowledge base as a service to other platforms, Deckelmann’s team launched an AI experiment: a plug-in for ChatGPT’s platform that allows the chatbot to use and summarize Wikipedia’s most up-to-date information to answer a user’s query. The results of that experiment are still being analyzed, but Deckelmann says it’s far from clear how and even if users may want to interact with Wikipedia off the platform. Meanwhile, in February she convened leaders from open-source technology, research, academia, and industry to discuss ways to collaborate and coordinate on addressing the big, thorny questions raised by AI. It’s the first of multiple meetings that Deckelmann hopes will push forward the conversation around sustainability. 

Deckelmann’s product approach is careful and considered—and that’s by design. In contrast to so much of the tech industry’s mad dash to capitalize on the AI hype, her goal is to bring Wikipedia forward to meet the moment, while supporting the complex human ecosystem that makes it special. It’s a particularly humble mission, but one that follows from her career-long dedication to supporting healthy and sustainable communities online. “Wikipedia is an incredible thing, and you might look at it and think, ‘Oh, man, I want to leave my mark on it.’ But I don’t,” she says. “I want to help [Wikipedia] out just enough that it’s able to keep going for a really long time.” She has faith that the people of the internet can take it from there.

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.

Inside the hunt for new physics at the world’s largest particle collider

In 1977, Ray and Charles Eames released a remarkable film that, over the course of just nine minutes, spanned the limits of human knowledge. Powers of Ten begins with an overhead shot of a man on a picnic blanket inside a one-square-meter frame. The camera pans out: 10, then 100 meters, then a kilometer, and eventually all the way to the then-known edges of the observable universe—10²⁴ meters. There, at the farthest vantage, it reverses. The camera zooms back in, flying through galaxies to arrive at the picnic scene, where it plunges into the man’s skin, digging down through successively smaller scales: tissues, cells, DNA, molecules, atoms, and eventually atomic nuclei—10⁻¹⁴ meters. The narrator’s smooth voice-over ends the journey: “As a single proton fills our scene, we reach the edge of present understanding.”

During the intervening half-century, particle physicists have been exploring the subatomic landscape where Powers of Ten left off. Today, much of this global effort centers on CERN’s Large Hadron Collider (LHC), an underground ring 17 miles (27 kilometers) around that straddles the border between Switzerland and France. There, powerful magnets guide hundreds of trillions of protons as they do laps at nearly the speed of light underneath the countryside. When a proton headed clockwise plows into a proton headed counterclockwise, the churn of matter into energy transmutes the protons into debris: electrons, photons, and more exotic subatomic bric-a-brac. The newly created particles explode radially outward, where they are picked up by detectors. 

In 2012, using data from the LHC, researchers discovered a particle called the Higgs boson. In the process, they answered a nagging question: Where do fundamental particles, such as the ones that make up all the protons and neutrons in our bodies, get their mass? A half-century earlier, theorists had cautiously dreamed the Higgs boson up, along with an accompanying field that would invisibly suffuse space and provide mass to particles that interact with it. When the particle was finally found, scientists celebrated with champagne. A Nobel for two of the physicists who predicted the Higgs boson soon followed.

But now, more than a decade after the excitement of finding the Higgs, there is a sense of unease, because there are still unanswered questions about the fundamental constituents of the universe. 

Perhaps the most persistent of these questions is the identity of dark matter, a mysterious substance that binds galaxies together and makes up 27% of the cosmos’s mass. We know dark matter must exist because we have astronomical observations of its gravitational effects. But since the discovery of the Higgs, the LHC has seen no new particles—of dark matter or anything else—despite nearly doubling its collision energy and quintupling the amount of data it can collect. Some physicists have said that particle physics is in a “crisis,” but there is disagreement even on that characterization: another camp insists the field is fine, and still others say that there is indeed a crisis, but that the crisis is good. “I think the community of particle phenomenologists is in a deep crisis, and I think people are afraid to say those words,” says Yoni Kahn, a theorist at the University of Illinois Urbana-Champaign.

The anxieties of particle physicists may, at first blush, seem like inside baseball. In reality, they concern the universe, and how we can continue to study it—of interest if you care about that sort of thing. The past 50 years of research have given us a spectacularly granular view of nature’s laws, each successive particle discovery clarifying how things really work at the bottom. But now, in the post-Higgs era, particle physicists have reached an impasse in their quest to discover, produce, and study new particles at colliders. “We do not have a strong beacon telling us where to look for new physics,” Kahn says. 

So, crisis or no crisis, researchers are trying something new. They are repurposing detectors to search for unusual-looking particles, squeezing what they can out of the data with machine learning, and planning for entirely new kinds of colliders. The hidden particles that physicists are looking for have proved more elusive than many expected, but the search is not over—nature has just forced them to get more creative. 

An almost-complete theory

As the Eameses were finishing Powers of Ten in the late ’70s, particle physicists were bringing order to a “zoo” of particles that had been discovered in the preceding decades. Somewhat drily, they called this framework, which enumerated the kinds of particles and their dynamics, the Standard Model.

Roughly speaking, the Standard Model separates fundamental particles into two types: fermions and bosons. Fermions are the bricks of matter—two kinds of fermions called up and down quarks, for example, are bound together into protons and neutrons. If those protons and neutrons glom together and find an electron (or electrons) to orbit them, they become an atom. Bosons, on the other hand, are the mortar between the bricks. Bosons are responsible for all the fundamental forces besides gravity: electromagnetism; the weak force, which is involved in radioactive decay; and the strong force, which binds nuclei together. To transmit a force between one fermion and another, there must be a boson to act as a messenger. For example, quarks feel the attractive power of the strong force because they send and receive bosons called gluons. 

The Standard Model

This framework unites three of the four fundamental forces and tames an unruly zoo into just 17 elementary particles.

Quarks are bound together by gluons. They form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei.

Leptons can be charged or neutral. The charged leptons are the electron, muon, and tau. Each of these has a neutral neutrino counterpart.

Gauge bosons convey forces. Gluons carry the strong force; photons carry the electromagnetic force; and W and Z bosons carry the weak force, which is involved in radioactive processes.

The Higgs boson is the fundamental particle associated with the Higgs field, a field that permeates the entire universe and gives mass to other fundamental particles.

Nearly 50 years later, the Standard Model remains superbly successful; even under stress tests, it correctly predicts fundamental properties of the universe, like the magnetic properties of the electron and the mass of the Z boson, to extremely high accuracy. It can reach well past where Powers of Ten left off, to the scale of 10⁻²⁰ meters, roughly a 100,000th the size of a proton. “It’s remarkable that we have a correct model for how the world works down to distances of 10⁻²⁰ meters. It’s mind blowing,” says Seth Koren, a theorist at the University of Notre Dame, in Indiana.

Despite its accuracy, physicists have their pick of questions the Standard Model doesn’t answer—what dark matter actually is, why matter dominates over antimatter when they should have been made in equal amounts in the early universe, and how gravity fits into the picture. 

Over the years, thousands of papers have suggested modifications to the Standard Model to address these open questions. Until recently, most of these papers relied on the concept of supersymmetry, abbreviated to the friendlier “SUSY.” Under SUSY, fermions and bosons are actually mirror images of one another, so that every fermion has a boson counterpart, and vice versa. The photon would have a superpartner dubbed a “photino” in SUSY parlance, while an electron would have a “selectron.” If these particles were high in mass, they would be “hidden,” unseen unless a sufficiently high-energy collision left them as debris. In other words, to create these heavy superpartners, physicists needed a powerful particle collider.

It might seem strange, and overly complicated, to double the number of particles in the universe without direct evidence. SUSY’s appeal was in its elegant promise to solve two tricky problems. First, superpartners would explain the Higgs boson’s oddly low mass. The Higgs is about 100 times more massive than a proton, but the math suggests it should be 100 quadrillion times more massive. (SUSY’s quick fix is this: every particle that interacts with the Higgs contributes to its mass, causing it to balloon. But each superpartner would counteract its ordinary counterpart’s contribution, getting the mass of the Higgs under control.) The second promise of SUSY: those hidden particles would be ideal candidates for dark matter. 
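For readers who want to see the bookkeeping behind that quick fix, here is the standard schematic version of the cancellation (a textbook sketch, not something quoted by the researchers in this story; Λ stands for the energy scale up to which the theory is trusted):

```latex
% One-loop corrections to the Higgs mass-squared. A fermion f that
% couples to the Higgs with strength \lambda_f shifts the mass-squared by
\Delta m_H^2 \;\approx\; -\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2 ,
% while each of its two scalar superpartners, with coupling \lambda_S,
% contributes with the opposite sign:
\Delta m_H^2 \;\approx\; +\frac{\lambda_S}{16\pi^2}\,\Lambda^2 .
% Unbroken supersymmetry enforces \lambda_S = |\lambda_f|^2, so the
% enormous \Lambda^2 pieces cancel pair by pair, leaving the Higgs
% naturally light.
```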

SUSY was so nifty a fix to the Standard Model’s problems that plenty of physicists thought they would find superpartners before they found the Higgs boson when the LHC began taking data in 2010. Instead, there has been resounding silence. Not only has there been no evidence for SUSY, but many of the most promising scenarios where SUSY particles would solve the problem of the Higgs mass have been ruled out.

At the same time, many non-collider experiments designed to directly detect the kind of dark matter you’d see if it were made up of superpartners have come up empty. “The lack of evidence from both direct detection and the LHC is a really strong piece of information the field is still kind of digesting,” Kahn says. 

Inside a part of the high-luminosity LHC (HL-LHC) project at CERN where civil engineering work has been completed. The upgrade, which is set to be completed by the end of the 2020s, will send more protons into the collider’s beams, creating more collisions and thus more data.
SAMUEL JOSEPH HERTZOG/CERN

Many younger researchers—like Sam Homiller, a theorist at Harvard University—are less attached to the idea. “[SUSY] would have been a really pretty story,” says Homiller. “Since I came in after it … it’s just kind of like this interesting history.” 

Some theorists are now directing their search away from particle accelerators and toward other sources of hidden particles. Masha Baryakhtar, a theorist at the University of Washington, uses data from stars and black holes. “These objects are really high density, often high temperature. And so that means that they have a lot of energy to give up to create new particles,” Baryakhtar says. In their nuclear furnaces, stars might produce loads and loads of another dark matter candidate called the axion. There are experiments on Earth that aim to detect such particles as they reach us. But if a star is expending energy to create axions, there will also be telltale signs in astronomical observations. Baryakhtar hopes these celestial bodies will be a useful complement to detectors on Earth. 

Other researchers are finding ways to give new life to old ideas like SUSY. “I think SUSY is wonderful—the only thing that’s not wonderful is that we haven’t found it,” quips Karri DiPetrillo, an experimentalist at the University of Chicago. She points out that SUSY is far from being ruled out. In fact, some promising versions of SUSY that account for dark matter (but not the Higgs mass) are completely untested. 

After initial investigations did not find SUSY in the most obvious places, many researchers began looking for “long-lived particles” (LLPs), a generic class of potential particles that includes many possible superpartners. Because detectors are primarily designed to see particles that decay immediately, spotting LLPs challenges researchers to think creatively. 

“You need to know the details of the experiment that you’re working on in a really intimate way,” DiPetrillo says. “That’s the dream—to really be using your experiment and pushing it to the max.”

The two general-purpose detectors at the LHC, ATLAS and CMS, are a bit like onions, with concentric layers of particle-tracking hardware. Most of the initial mess from proton collisions—jets and showers of quarks—decays immediately and gets absorbed by the inner layers of the onion. The outermost layer of the detector is designed to spot the clean, arcing paths of muons, which are heavier versions of electrons. If an LLP created in the collision made it to the muon tracker and then decayed, the particle trajectory would be bizarre, like a baseball hit from first base instead of home plate. A recent search by the CMS collaboration used this approach to search for LLPs but didn’t spot any evidence for them.
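To make the baseball analogy concrete, here is a minimal numerical sketch (my own illustration, not the CMS analysis): a track’s “impact parameter” measures how far its straight-line extrapolation misses the collision point, and a track born in an LLP decay misses by a lot.

```python
import numpy as np

def impact_parameter(point, direction):
    """Perpendicular distance from the origin (the collision point)
    to the line traced by point + t * direction."""
    d = direction / np.linalg.norm(direction)
    return np.linalg.norm(point - np.dot(point, d) * d)

# A prompt track starts at the collision point; a displaced track starts
# wherever the long-lived particle happened to decay (toy numbers).
prompt    = impact_parameter(np.array([0.0, 0.0]), np.array([1.0, 0.3]))
displaced = impact_parameter(np.array([2.0, 1.5]), np.array([1.0, 0.3]))
print(prompt, displaced)  # ~0 versus ~0.86: the telltale LLP signature
```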

Researchers scouring the data often don’t have any faith that any particular search will turn up new physics, but they feel a responsibility to search all the same. “We should do everything in our power to make sure we leave no stone unturned,” DiPetrillo says. “The worst thing about the LHC would be if we were producing SUSY particles and we didn’t find them.” 

Needles in high-energy haystacks

Searching for new particles isn’t just a matter of being creative with the hardware; it’s also a software problem. While it’s running, the LHC generates about a petabyte of collision data per second—a veritable firehose of information. Less than 1% of that gets saved, explains Ben Nachman, a data physicist at Lawrence Berkeley National Lab: “We just can’t write a petabyte per second to tape right now.” 
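A back-of-envelope calculation shows why. Taking the petabyte-per-second figure above, and assuming purely for illustration that about 10 gigabytes per second can actually be written out, the trigger system has to throw away all but a sliver of what the detector produces:

```python
# Rough trigger arithmetic. The raw rate comes from the article; the
# writable bandwidth is an assumed, illustrative figure.
raw_rate = 1e15   # bytes per second generated by the detector
writable = 10e9   # assumed bytes per second that can be stored
print(f"fraction kept: {writable / raw_rate:.4%}")  # ~0.001%, far below 1%
```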

Dealing with that data will only become more important in the coming years as the LHC receives its “high luminosity” upgrade. Starting at the end of the decade, the HL-LHC will operate at the same energy, but it will record about 10 times more data than the LHC has accumulated so far. The boost will come from an increase in beam density: stuffing more protons into the same space leads to more collisions, which translates to more data. As the frame fills with dozens of collisions, the detector begins to look like a Jackson Pollock painting, with splashes of particles that are impossible to disentangle.

To handle the increasing data load and search for new physics, particle physicists are borrowing from other disciplines, like machine learning and math. “There’s a lot of room for creativity and exploration, and really just kind of thinking very broadly,” says Jessica Howard, a phenomenologist at the University of California, Santa Barbara. 

One of Howard’s projects involves applying optimal transport theory, an area of mathematics concerned with moving stuff from one place to the next, to particle detection. (The field traces its roots to the 18th century, when the French mathematician Gaspard Monge was thinking about the optimal way to excavate earth and move it.) Conventionally, the “shape” of a particle collision—roughly, the angles at which the particles fly out—has been described by simple variables. But using tools from optimal transport theory, Howard hopes to help detectors be more sensitive to new kinds of particle decays that have unusual shapes, and better able to handle the HL-LHC’s higher rates of collisions.
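As a rough illustration of the idea, the snippet below uses SciPy's one-dimensional Wasserstein distance (the classic optimal-transport metric, also called the earth mover's distance) to compare the shapes of two made-up events. The angles and energies are invented for illustration; Howard's actual methods are far more elaborate.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Sketch only: compare the "shape" of two toy collision events using the
# 1-D Wasserstein (earth mover's) distance from optimal transport theory.
# The angles and energies below are invented, not real collider data.

# Each event: angles (radians) at which particles flew out, weighted by energy.
angles_a = np.array([0.1, 0.5, 2.9, 3.1])      # roughly back-to-back, jet-like
energy_a = np.array([40.0, 35.0, 45.0, 30.0])  # GeV

angles_b = np.array([0.2, 1.6, 3.0, 4.7])      # energy spread more evenly
energy_b = np.array([30.0, 30.0, 30.0, 30.0])

# Intuitively: how much "work" it takes to shovel one energy distribution
# into the shape of the other.
d = wasserstein_distance(angles_a, angles_b,
                         u_weights=energy_a, v_weights=energy_b)
print(f"shape distance between the two events: {d:.3f}")
```

A small distance means the two events have similar shapes; a large one flags an unusual pattern of decay products.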

As with many new approaches, there are doubts and kinks to work out. “It’s a really cute idea, but I have no idea what it’s useful for at the moment,” Nachman says of optimal transport theory. He is a proponent of novel machine-learning approaches, some of which he hopes will allow researchers to do entirely different kinds of searches and “look for patterns that we couldn’t have otherwise found.”

Though particle physicists were early adopters and have been using machine learning since the late 1990s, the past decade of advances in deep learning has dramatically changed the landscape. 

Packing more power

The energy of particle colliders (as measured by the combined energy of two colliding particles) has risen over the decades, opening up new realms of physics to explore.

A bubble chart showing the collision energies, in GeV, of 32 colliders, from machines operating in 1960 to colliders proposed for 2050.

Collisions between leptons, such as electrons and positrons, are efficient and precise, but limited in energy. Among potential future projects is the possibility of colliding muons, which would give a big jump in collision energy.

Collisions between hadrons, such as protons and antiprotons, have high energy but limited precision. Although it would start with electrons (rightmost point), a possible Future Circular Collider could reach 100,000 (10⁵) GeV by colliding protons.

“[Machine learning] can almost always improve things,” says Javier Duarte, an experimentalist at the University of California, San Diego. In a hunt for needles in haystacks, the ability to improve the signal-to-noise ratio is crucial. Unless physicists can figure out better ways to search, more data might not help much—it might just be more hay. 

One of the most notable but understated applications for this kind of work is refining the picture of the Higgs. About 60% of the time, the Higgs boson decays into a pair of bottom quarks. Bottom quarks are tricky to find amid the mess of debris in the detectors, so researchers had to study the Higgs through its decays into an easy-to-spot photon pair, even though that happens only about 0.2% of the time. But in the span of a few years, machine learning has dramatically improved the efficiency of bottom-quark tagging, which allows researchers another way to measure the Higgs boson. “Ten years ago, people thought this was impossible,” Duarte says. 
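The gist of such a tagger fits in a few lines. The sketch below trains an off-the-shelf classifier on two invented jet features; real taggers at the LHC use dozens of detector-level inputs and deep neural networks, so treat this strictly as a cartoon of the approach.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Cartoon of ML-based bottom-quark ("b") tagging on synthetic data. The two
# features are loose stand-ins for real ones: b hadrons travel a short
# distance before decaying, so b jets tend to show displaced decay vertices
# and somewhat more charged tracks.
rng = np.random.default_rng(1)
n = 20_000

is_b = rng.integers(0, 2, size=n)                      # 1 = bottom-quark jet
vertex_mm = rng.exponential(np.where(is_b, 3.0, 0.3))  # decay displacement (mm)
n_tracks = rng.poisson(np.where(is_b, 10, 5))          # charged tracks in jet

X = np.column_stack([vertex_mm, n_tracks])
X_tr, X_te, y_tr, y_te = train_test_split(X, is_b, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"toy b-tagging accuracy: {clf.score(X_te, y_te):.2f}")
```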

The Higgs boson is of central importance to physicists because it can tell them about the Higgs field, the phenomenon that gives mass to all the other elementary particles. Even though some properties of the Higgs boson have been well studied, like its mass, others—like the way it interacts with itself—have yet to be measured with any kind of precision. Measuring those properties could rule out (or confirm) theories about dark matter and more. 

What’s truly exciting about machine learning is its potential for a completely different class of searches called anomaly detection. “The Higgs is kind of the last thing that was discovered where we really knew what we were looking for,” Duarte says. Researchers want to use machine learning to find things they don’t know to look for.

In anomaly detection, researchers don’t tell the algorithm what to look for. Instead, they give the algorithm data and tell it to describe the data in as few bits of information as possible. Currently, anomaly detection is still nascent and hasn’t resulted in any strong hints of new physics, but proponents are eager to try it out on data from the HL-LHC. 
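A stripped-down version of that compression trick can be written with principal-component analysis standing in for the neural networks practitioners actually use. Everything below is synthetic and meant only to show the logic: compress each event into a few numbers, reconstruct it, and flag the events the model describes worst.

```python
import numpy as np
from sklearn.decomposition import PCA

# Compression-based anomaly detection on synthetic "events" (not LHC data).
# Events the model cannot summarize in a few numbers are the anomalies.
rng = np.random.default_rng(2)

# 10,000 ordinary events with 20 correlated features, plus one planted oddball.
ordinary = rng.normal(size=(10_000, 20)) @ rng.normal(size=(20, 20))
oddball = rng.normal(loc=8.0, size=(1, 20))
events = np.vstack([ordinary, oddball])

# Learn a 3-number description of ordinary events, then reconstruct everything.
pca = PCA(n_components=3).fit(ordinary)
reconstructed = pca.inverse_transform(pca.transform(events))

# A poor reconstruction means the event doesn't fit the learned description.
error = np.mean((events - reconstructed) ** 2, axis=1)
print("most anomalous event index:", int(np.argmax(error)))  # likely 10000
```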

Because anomaly detection aims to find anything that is sufficiently out of place, physicists call this style of search “model agnostic”—it doesn’t depend on the assumptions of any particular theoretical model. 

Not everyone is fully on board. Some theorists worry that the approach will only yield more false alarms from the collider—more tentative blips in the data like “two-sigma bumps,” so named for their low level of statistical certainty. These are generally flukes that eventually disappear with more data and analysis. Koren is concerned that this will be even more the case with such an open-ended technique: “It seems they want to have a machine that finds more two-sigma bumps at the LHC.” 

Nachman told me that he received a lot of pushback; he says one senior physicist told him, “If you don’t have a particular model in mind, you’re not doing physics.” Searches based on specific models, he says, have been amazingly productive—he points to the discovery of the Higgs boson as a prime example—but they don’t have to be the end of the story. “Let the data speak for themselves,” he says.

Building bigger machines

One thing particle physicists would really like in the future is more precision. The problem with protons is that each one is actually a bundle of quarks. Smashing them together is like a subatomic food fight. Ramming indivisible particles like electrons (and their antiparticles, positrons) into one another results in much cleaner collisions, like the ones that take place on a pool table. Without the mess, researchers can make far more precise measurements of particles like the Higgs. 

An electron-positron collider would produce so many Higgs bosons so cleanly that it’s often referred to as a “Higgs factory.” But there are currently no electron-­positron colliders that have anywhere near the energies needed to probe the Higgs. One possibility on the horizon is the Future Circular Collider (FCC). It would require digging an underground ring with a circumference of 55 miles (90 kilometers)—three times the size of the LHC—in Switzerland. That work would likely cost tens of billions of dollars, and the collider would not turn on until nearly 2050. There are two other proposals for nearer-term electron-positron colliders in China and Japan, but geopolitics and budgetary issues, respectively, make them less appealing prospects. 

A snapshot of simulated particle tracks inside a muon collider. The simulation suggests it’s possible to reconstruct information about the Higgs boson from the bottom quarks (red dots) it decays into, despite the noisy environment.
D. LUCCHESI ET AL.

Physicists would also like to go to higher energies. “The strategy has literally never failed us,” Homiller says. “Every time we’ve gone to higher energy, we’ve discovered some new layer of nature.” It will be nearly impossible to do so with electrons; because they have such a low mass, they radiate away about a trillion times more energy than protons every time they loop around a collider. But under CERN’s plan, the FCC tunnel could be repurposed to collide protons at energies eight times what’s possible in the LHC—about 50 years from now. “It’s completely scientifically sound and great,” Homiller says. “I think that CERN should do it.” 

Could we get to higher energies faster? In December, the alliteratively named Particle Physics Project Prioritization Panel (P5) put forward a vision for the near future of the field. In addition to addressing urgent priorities like continued funding for the HL-LHC upgrade and plans for telescopes to study the cosmos, P5 also recommended pursuing a “muon shot”—an ambitious plan to develop technology to collide muons. 

The idea of a muon collider has tantalized physicists because of its potential to combine both high energies and—since the particles are indivisible—clean collisions. It seemed well out of reach until recently; muons decay in just 2.2 microseconds, which makes them extremely hard to work with. Over the past decade, however, researchers have made strides, showing that, among other things, it should be possible to manage the roiling cloud of energy caused by decaying muons as they’re accelerated around the machine. Advocates of a muon collider also tout its smaller size (about 10 miles around), its faster timeline (optimistically, as early as 2045), and the possibility of a US site (specifically, Fermi National Accelerator Laboratory, about 50 miles west of Chicago).

There are plenty of caveats: a muon collider still faces serious technical, financial, and political hurdles—and even if it is built, there is no guarantee it will discover hidden particles. But especially for younger physicists, the panel’s endorsement of muon collider R&D is more than just a policy recommendation; it is a bet on their future. “This is exactly what we were hoping for,” Homiller says. “This opens a pathway to having this exciting, totally different frontier of particle physics in the US.” It’s a frontier he and others are keen to explore. 

Dan Garisto is a freelance physics journalist based in New York City.

This Chinese city wants to be the Silicon Valley of chiplets

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Last month, MIT Technology Review unveiled our picks for the 10 Breakthrough Technologies of 2024. These are the technological advancements that we believe will change our lives today or sometime in the future. Among them, there is one that specifically matters to the Chinese tech sector: chiplets.

That’s what I wrote about in a new story today. Chiplets—the new chipmaking approach that breaks down chips into independent modules to reduce design costs and improve computing performance—can help China develop more powerful chips despite US government sanctions that prevent Chinese companies from importing certain key technologies.

Outside China, chiplets are one of the alternative routes that the semiconductor industry could take to improve chip performance cost-effectively. Instead of endlessly trying to cram more transistors into one chip, the chiplet approach separates the functions of a chip into several smaller devices, each of which can be easier to make than a powerful single-piece chip. Companies like Apple and Intel have already made commercial products this way. 

But within China, the technology takes on a different level of significance. US sanctions mean that Chinese companies can’t purchase the most advanced chips or the equipment to make them, so they have to figure out how to maximize the technologies they have. And chiplets come in handy here: if companies can make each chiplet to the most advanced level they are capable of and assemble those chiplets into a system, the result can act as a substitute for a more powerful cutting-edge chip.

The technology needed to make chiplets is not that new. Huawei, the Chinese tech giant that has a chip-design subsidiary called HiSilicon, experimented with its first chiplet design in 2014. But the technology became more important to the company after it was subjected to strict US sanctions in 2019 and couldn’t work with foreign factories anymore. In 2022, Huawei’s then chairman, Guo Ping, said the company was hoping to connect and stack up less advanced chip modules to keep its products competitive in the market. 

Currently, there’s a lot of money going into the chiplet space. The Chinese government and investors have recognized the importance of chiplets, and they are pouring funding into academic projects and startups.

In particular, one Chinese city has gone all in on chiplets, and you very likely have never heard its name: Wuxi (pronounced woo-she). 

Halfway between Shanghai and Nanjing, Wuxi is a medium-size city with a strong manufacturing industry. It also has a long history in the semiconductor sector: the Chinese government built a state-owned wafer factory there in the ’60s, and when the government decided to invest in the semiconductor industry in 1989, 75% of the state budget went into the factory in Wuxi.

By 2022, Wuxi had over 600 chip companies and was behind only Shanghai and Beijing in semiconductor industry competitiveness. Notably, Wuxi is the center of chip packaging—the final steps in the assembly process, like integrating the silicon part with its plastic case and testing the chip’s performance. JCET, the third-largest chip packaging company in the world and the largest of its kind in China, was founded in Wuxi more than five decades ago.

Their prominence in the packaging sector gives JCET and Wuxi an advantage in chiplets. Compared with traditional chips, chiplets are more accommodating of less-advanced manufacturing capabilities, but they require more sophisticated packaging techniques to ensure that different modules can work together seamlessly. So Wuxi’s established strength in packaging means it can be one step ahead of other cities in developing chiplets.

In 2023, Wuxi announced its plan to become the “Chiplet Valley.” The city has pledged to spend $14 million to subsidize companies that develop chiplets in the region, and it has formed the Wuxi Institute of Interconnect Technology to focus research efforts on chiplets. 

Wuxi is a great example of China’s hidden role in the global semiconductor industry: relative to sectors like chip design and manufacturing, packaging is labor intensive and not as desirable. That’s why there’s basically no packaging capability left in Western countries, and why places like Wuxi usually fly under everyone’s radar.

But with the opportunity presented by chiplets, as well as other advancements in packaging techniques, there’s a chance for chip packaging to enter center stage again. And China is betting on that possibility heavily right now to leverage one of its few domestic strengths to get ahead in the semiconductor industry.

Have you heard of Wuxi? Do you think it will play a more important role in the global semiconductor supply chain in the future? Let me know your thoughts at zeyi@technologyreview.com.

Catch up with China

1. TikTok’s CEO, Shou Zi Chew, testified in front of the US Senate on social media’s exploitation of children, along with the CEOs of Meta, Twitter, Snap, and Discord. (Associated Press)

2. Mayors from the US heartland are being invited to visit China as the country hopes to find local support outside Washington politics. (Washington Post $)

3. The genetic testing company 23andMe is facing a new class action lawsuit over a data breach that seems to have targeted people with Chinese and Ashkenazi Jewish heritage. (New York Times $)

4. Tesla is opening a new battery plant in Nevada, with manufacturing equipment bought from China’s battery giant CATL. (Bloomberg $)

5. A new Chinese documentary shows the everyday lives of ordinary blue-collar workers by stitching together 887 short videos the workers shot themselves on their mobile phones. (Sixth Tone)

6. Baidu’s venture capital arm is planning to sell its stakes in US startups, as the US-China investment environment has become much more politically sensitive. (The Information $)

7. Huawei and China’s biggest chipmaker, SMIC, could start making five-nanometer chips—still one generation behind the most advanced chips today—as early as this year. (Financial Times $)

8. A pigeon was detained in India for eight months, suspected of carrying spy messages for China. It turns out it’s an open-water racing bird from Taiwan. (Associated Press)

Lost in translation

Shanghai’s attempt to ban ride-hailing services from picking up passengers near the Pudong Airport lasted exactly one week before it was called off. Starting January 29, Chinese ride-hailing apps like Didi stopped serving users in the Shanghai airport area at the request of the local transportation department, according to the Chinese newspaper Southern Metropolis Daily. While traditional taxis were still allowed at the airport, passengers reported longer wait times and frequent refusals of service by taxi drivers. The ride-hailing ban, aimed at ensuring smooth traffic flow during the Spring Festival travel rush, soon faced criticism and legal scrutiny for its suddenness and potential violations of antitrust laws. The situation underscores the ongoing debate over the role of ride-hailing services during peak travel seasons, with some Chinese cities, like Shanghai, frowning upon them while others have embraced them. In the early hours of February 4, the Shanghai government reversed the ban, and ride-hailing cars were back in the airport area. 

One last thing

Lingyan, a panda living in a zoo in Henan province, could have been the first panda to get suspended on China’s TikTok for … twerking. The zoo hosted a livestream session on January 31, but it was suddenly suspended by the algorithm when Lingyan climbed on top of a dome and started shaking his butt. I don’t know if this means the algorithm is too good at recognizing twerking or too bad at telling pandas from humans.

A panda standing on top of a play den and twerking.

LUANCHUAN ZHUHAI WILDLIFE PARK VIA DOUYIN