Three ways the US could help universities compete with tech companies on AI innovation

The ongoing revolution in artificial intelligence has the potential to dramatically improve our lives—from the way we work to what we do to stay healthy. Yet ensuring that America and other democracies can help shape the trajectory of this technology requires going beyond the tech development taking place at private companies. 

Research at universities drove the AI advances that laid the groundwork for the commercial boom we are experiencing today. Importantly, academia also produced the leaders of pioneering AI companies. 

But today, large foundation models, or LFMs, like ChatGPT, Claude, and Gemini, require such vast computational power and such extensive data sets that private companies have replaced academia at the frontier of AI. Empowering our universities to remain alongside them at the forefront of AI research will be key to realizing the field’s long-term potential. This will require correcting the stark asymmetry between academia and industry in access to computing resources.

Academia’s greatest strength lies in its ability to pursue long-term research projects and fundamental studies that push the boundaries of knowledge. The freedom to explore and experiment with bold, cutting-edge theories leads to the discoveries that serve as the foundation for future innovation. While tools enabled by LFMs are in everybody’s pocket, many questions about them remain unanswered, since they are still a “black box” in many ways. For example, we know AI models have a propensity to hallucinate, but we still don’t fully understand why.

Because they are insulated from market forces, universities can chart a future where AI truly benefits the many. Expanding academia’s access to resources would foster more inclusive approaches to AI research and its applications. 

The pilot of the National Artificial Intelligence Research Resource (NAIRR), mandated in President Biden’s October 2023 executive order on AI, is a step in the right direction. Through partnerships with the private sector, the NAIRR will create a shared research infrastructure for AI. If it realizes its full potential, it will be an essential hub that helps academic researchers access GPU computational power more effectively. Yet even if the NAIRR is fully funded, its resources are likely to be spread thin. 

This problem could be mitigated if the NAIRR focused on a select number of discrete projects, as some have suggested. But we should also pursue additional creative solutions to get meaningful numbers of GPUs into the hands of academics. Here are a few ideas:

First, we should add large-scale GPU clusters to the supercomputer infrastructure the US government already funds, and enable academic researchers to partner with the US National Labs on grand challenges in AI research.

Second, the US government should explore ways to reduce the costs of high-end GPUs for academic institutions—for example, by offering financial assistance such as grants or R&D tax credits. Initiatives like New York’s, which make universities key partners with the state in AI development, are already playing an important role at a state level. This model should be emulated across the country. 

Lastly, recent export control restrictions could over time leave some US chipmakers with surplus inventory of leading-edge AI chips. In that case, the government could purchase this surplus and distribute it to universities and academic institutions nationwide.

Imagine the surge of academic AI research and innovation these actions would ignite. Ambitious researchers at universities have a wealth of diverse ideas that are too often stopped short for lack of resources. But supplying universities with adequate computing power will enable their work to complement the research carried out by private industry. Thus equipped, academia can serve as an indispensable hub for technological progress, driving interdisciplinary collaboration, pursuing long-term research, nurturing talent that produces the next generation of AI pioneers, and promoting ethical innovation. 

Historically, similar investments have yielded critical dividends in innovation. The United States of the postwar era cultivated a symbiotic relationship among government, academia, and industry that carried us to the moon, seeded Silicon Valley, and created the internet.

We need to ensure that academia remains a strong pole in our innovation ecosystem. Investing in its compute capacity is a necessary first step. 

Ylli Bajraktari is CEO of the Special Competitive Studies Project (SCSP), a nonprofit initiative that seeks to strengthen the United States’ long-term competitiveness. 

Tom Mitchell is the Founders University Professor at Carnegie Mellon University. 

Daniela Rus is a professor of electrical engineering and computer science at MIT and director of its Computer Science and Artificial Intelligence Laboratory (CSAIL).

Trump wants to unravel Biden’s landmark climate law. Here is what’s most at risk.

President Joe Biden’s crowning legislative achievement was enacting the Inflation Reduction Act, easily the nation’s largest investment in addressing the rising dangers of climate change.

Yet Donald Trump’s advisors and associates have clearly indicated that dismantling the landmark law would sit at the top of the Republican front-runner’s to-do list should he win the presidential election. If he succeeds, it could stall the nation’s shift to cleaner industries and stunt efforts to cut the greenhouse-gas pollution warming the planet. 

The IRA unleashes at least hundreds of billions of dollars in federal subsidies for renewable energy sources, electric vehicles, batteries, heat pumps, and more. It is the “backbone” of the Biden administration’s plan to meet the nation’s commitments under the Paris climate agreement, putting the US on track to cut emissions by as much as 42% from 2005 levels by the end of this decade, according to the Rhodium Group, a research firm. 

But the sprawling federal policy package marks the “biggest defeat” conservatives have suffered during Biden’s tenure, according to Myron Ebell, who led the Environmental Protection Agency transition team during Trump’s administration. And repealing the law has become an obsession among many conservatives, including the authors of the Heritage Foundation’s Project 2025, widely seen as a far-right road map for the early days of a second Trump administration. 

The IRA’s tax credits for EVs and clean power projects appear especially vulnerable, climate policy experts say. Losing those provisions alone could reshape the nation’s emissions trajectory, potentially adding back hundreds of millions of metric tons of climate pollution this decade. 

Moreover, Trump’s wide-ranging pledges to weaken international institutions, inflame global trade wars, and throw open the nation’s resources to fossil-fuel extraction could have compounding effects on any changes to the IRA, potentially undermining economic growth, the broader investment climate, and prospects for emerging green industries.

Farewell to EV tax credits

The IRA leverages government funds to accelerate the energy transition through a combination of direct grants and tax credits, which allow companies or individuals to cut their federal obligations in exchange for buying, installing, investing in, or producing cleaner power and products. It is enacted law, not a federal agency regulation or executive order, which means that any substantial changes would need to be achieved through Congress.

But the tax cuts for individuals pushed through during Trump’s time in office are set to expire next year. If he wins a second term, legislators seeking to extend those cuts could crack open the tax code and excise key components of the IRA, particularly if Republicans retain control of the House and pick up seats in the Senate. Eliminating any of those tax credits could help offset the added cost of restoring those Trump-era benefits.

Numerous policy observers believe that the pair of EV tax credits in the IRA, which together lop $7,500 off the cost of electric cars and trucks, would be one of the top targets. Subsidizing the cost of EVs polls terribly among Republicans, and throughout the primaries, most of the party’s candidates for president have fiercely attacked government support for the vehicles—none more than Trump himself. 

Former President Donald Trump speaks at a campaign event in Iowa.
SCOTT OLSON/GETTY IMAGES

On the campaign trail, he has repeatedly, erroneously referred to the policy as a mandate rather than a subsidy, while geographically tailoring the critique to his audience.

At a December rally in Iowa, the nation’s biggest corn producer, he pledged to cancel “Crooked Joe Biden’s insane, ethanol-killing electric-vehicle mandate on day one.”

And in the battleground state of Michigan in September, he pandered to the fears of autoworkers.

“Crooked Joe is siding with the left-wing crazies who will destroy automobile manufacturing and will destroy the country itself,” Trump said. “The damn things don’t go far enough, and they’re too expensive.”

Other Trump targets

Other IRA components likely to fall into Trump’s crosshairs include tax credits for investing in or operating emissions-free power plants that would come online in 2025 or later, says Josh Freed, who leads the climate and energy program at Third Way, a center-left think tank in Washington, DC.

These so-called technology-neutral credits are intended to replace earlier subsidies dedicated to renewables like solar and wind, encompassing a more expansive suite of energy-producing possibilities like nuclear, bioenergy, or power plants with carbon capture capabilities.

Those latter categories are more likely to have Republican support than, say, solar farms. But any policy primarily designed to accelerate the shift away from fossil fuels would likely be a ripe target in a second Trump administration, given the industry’s support for the candidate and his ideological opposition to climate action.

A number of other provisions could also come under attack within the law. Among them:

  • additional measures supporting the growing adoption of EVs, including tax credits for individuals and businesses that install charging infrastructure; 
  • fees on methane emissions from wells, processing plants, and pipelines, when they exceed certain thresholds;
  • a series of environmental-justice grants and bonus tax credits available for projects that help reduce pollution, provide affordable clean energy, and create jobs in low-income, marginalized areas;
  • a reinstated Superfund excise tax on crude oil and petroleum products, which could raise billions of dollars to fund the cleanup of hazardous-waste sites;
  • and a series of tax credits incentivizing consumers to add solar panels, install heat pumps, and improve the energy efficiency of their homes. 

Pushback

Observers are quick to note, however, that a wholesale repeal of the IRA is unlikely, because—well—it’s working.

By some accounts, the law has helped spur hundreds of billions of dollars in private investment into projects that could create nearly 200,000 jobs—and eight of the 10 congressional districts set to receive the biggest clean-energy investments announced in recent quarters are represented by Republicans, according to one analysis (and backed up by others). 

A disproportionate amount of the money is also flowing into low-income areas and “energy communities,” or regions that previously produced fossil fuels, according to data from the MIT Center for Energy and Environmental Policy Research and the Rhodium Group. 

As more and more renewables projects, mineral processing facilities, battery plants, and EV factories bring jobs and tax revenue to red states, the politics around clean energy are shifting, at least behind the scenes if not always in the public debate. 

All of which means some sizable share of Republicans will likely push back on more sweeping changes to the IRA, particularly if they would raise the costs on businesses and reduce the odds that new projects will move forward, says Sasha Mackler, executive director of the energy program at the Bipartisan Policy Center, a Washington, DC, think tank.

“Most of the tax credits are pretty popular within industry and in red states, which are generally the constituency that the Republican Party listens to when they shape their policies,” Mackler says. “When you start to go beyond the top-line political rhetoric and look at the actual tax credits themselves, they’re on much firmer ground than you might initially think just reading the newspaper and looking at what’s being said on the campaign trail.”

That means it might prove more difficult to rescind some of the hit-list items above than Trump would hope. And there are other big parts of the legislative package that Republicans might avoid picking fights over at all, such as the support for processing critical minerals, manufacturing batteries, capturing and storing carbon dioxide, and producing biofuels, given the broader support for these areas.

DC sources also say that clean-energy-focused policy shops and some climate tech companies themselves are already playing defense, stressing the importance of these policies to legislators in the run-up to the election. Meanwhile, if staffers at the Department of Energy and other federal agencies aren’t already rushing to get as much of the grant-based money in the IRA out the door as possible, they should be, says Leah Stokes, an associate professor of environmental politics at the University of California, Santa Barbara, who advised Democrats on crafting the law.

Among other funds, the law appropriates nearly $12 billion for the DOE’s loans office, which provides financing to accelerate the development of clean-energy projects. It also sets aside $5 billion in EPA grants designed to help states, local governments, and tribes implement efforts to cut greenhouse-gas pollution. 

“If DOE and EPA work fast enough, that money should be difficult to somehow claw back, because it will have been spent,” Stokes says.

Impact

Still, there’s no question that Trump and legislators eager to curry his favor could do real damage to the IRA and the clean-energy industries poised to benefit from it.

How much damage depends, of course, on what he succeeds in unraveling.

But take the example of the power sector subsidies. A study last year in the journal Science noted that with the IRA’s support for clean electricity, around 68% of the country’s power generation would come from low-emission sources by 2030, as opposed to 54% without the law. 

The Rhodium Group’s central estimate is that the IRA could cut power-sector pollution by nearly 500 million tons in 2030. 


How much these projections change would depend on which and how many of the provisions supporting the shift to cleaner power legislators manage to remove. In addition to the technology-neutral credits noted above, the IRA also provides federal support for extending the life of nuclear plants, deploying energy storage, and adding carbon capture and storage capabilities.

Meanwhile, an earlier report from RMI (formerly known as the Rocky Mountain Institute) offered a hint at what’s at stake for the EV sector. The research group noted that the assorted provisions within the IRA, when combined with the EPA’s proposal to tighten tailpipe rules, could propel electric passenger vehicles to 76% of all new sales by 2030. Without it, they would make up only about half of such sales by that point. (Notably, however, the Biden administration is now reportedly considering relaxing those rules to give automakers more time to ramp up EV production.)

All told, some 37 million additional EVs could hit the nation’s roads between now and 2032, eliminating more than 830 million tons of transportation emissions by that year and 2.4 billion tons by 2040, RMI estimates.

That adds up to a huge difference in the market prospects for EV makers, and in the economics of building new plants. 

The loss of the EV credits could create another notable ripple effect. For a purchased vehicle to qualify for one of the $3,750 tax credits, at least 60% of the battery components must be manufactured or assembled in North America. The other credit is available only if the batteries include a significant share of critical minerals extracted or processed in the US or through free-trade partners, or recycled in North America.  

The varied goals of these “domestic content requirements,” which helped drive the law past the legislative finish line, included ensuring that the US produces more of the materials and components for cleantech industries domestically, creating more jobs, reducing the nation’s reliance on China, and safeguarding US energy security as the country moves away from fossil fuels.

Losing the tax credits could dim hopes for reaching those goals—though some critics argue that trade deals and IRS interpretations have already watered down the credits’ provisions, ensuring that more manufacturers and models qualify.

Trump’s broader agenda

Trump has made clear he intends to hamstring additional climate efforts and bolster the oil and gas sector through numerous other means, potentially including federal regulations, executive orders, and Department of Justice actions. All of these would only magnify any impact from changes he might make to the IRA.

If he wins in November, he’s also likely, for instance, to direct the EPA to eliminate those tailpipe rules altogether. He may work to slow down, cut off, or claw back some of the $7.5 billion allocated under the Bipartisan Infrastructure Law to build out a national EV charging network.

Trump could also remove and refuse to replace the staff necessary to implement and oversee programs and funding throughout the DOE, the EPA, the National Oceanic and Atmospheric Administration, and other federal agencies. And he would very likely pull the US out of the Paris climate agreement again. 

How much of this Trump accomplishes could depend, in part, on how emboldened he feels upon entering office for a second term, when he’d likely still be battling multiple criminal cases against him. 

“It just depends if we assume he’s going to respect the law and color within the lines of our legal system, or if he’s going to be a fascist,” Stokes says. “That’s a huge question—and we should take it very seriously.”

In the end, it may also prove difficult to disentangle the effects of rolling back climate policies from any success he achieves in implementing his broader policy agenda. Trump has pledged to impose a 60% or higher tariff on Chinese goods, as well as a “pro-America system of universal baseline tariffs on most foreign products.” He has said he would encourage Russia to attack NATO allies and is reportedly considering pulling the US out of the military alliance. He’s discussed deploying military forces to suppress US protests, seal the southern border, and attack drug cartels in Mexico.

The potentially chaotic economic and geopolitical effects of such policies, at a point of spiraling global conflicts, could easily dwarf any direct consequences of altering climate laws and regulations.

As Freed puts it: “A world that is less stable and much more dangerous, economically and militarily, would have incalculable damage on climate and energy issues in a second Trump term.”

And on much else.

What are the hardest problems in tech we should be more focused on as a society?

Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify the problem at the intersection of technology and society that they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.

Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.



CREDITS

Reporting: MIT Technology Review Staff

Editing: Allison Arieff, Rachel Courtland, Mat Honan, Amy Nordrum

Copy editing: Linda Lowenthal

Fact checking: Matt Mahoney

Art direction: Stephanie Arnett

Chinese ChatGPT alternatives just got approved for the general public

On Wednesday, Baidu, one of China’s leading artificial-intelligence companies, announced it would open up access to its ChatGPT-like large language model, Ernie Bot, to the general public.

It’s been a long time coming. Launched in mid-March, Ernie Bot was the first Chinese ChatGPT rival. Since then, many Chinese tech companies, including Alibaba and ByteDance, have followed suit and released their own models. Yet all of them forced users to sit on waitlists or go through approval systems, making the products mostly inaccessible for ordinary users—a possible result, people suspected, of limits put in place by the Chinese state.

On August 30, Baidu posted on social media that it would also release a batch of new AI applications within Ernie Bot as the company rolled out open registration the following day. 

Citing an anonymous source, Bloomberg reported that regulatory approval would be given to “a handful of firms including fledgling players and major technology names.” Sina News, a Chinese publication, reported that eight Chinese generative AI chatbots have been included in the first batch of services approved for public release. 

ByteDance, which released the chatbot Doubao on August 18, and the Institute of Automation at the Chinese Academy of Sciences, which released Zidong Taichu 2.0 in June, are reportedly also included in the first batch. Other models from Alibaba, iFLYTEK, JD, and 360 are not.

When Ernie Bot was released on March 16, the response was a mix of excitement and disappointment. Many people deemed its performance mediocre relative to the previously released ChatGPT. 

But most people simply weren’t able to see it for themselves. The launch event didn’t feature a live demonstration, and later, to actually try out the bot, Chinese users needed to have a Baidu account and apply for a use license that could take as long as three months to come through. Because of this, some people who got access early were selling secondhand Baidu accounts on e-commerce sites, charging anywhere from a few bucks to over $100. 

More than a dozen Chinese generative AI chatbots were released after Ernie Bot. They are all pretty similar to their Western counterparts in that they are capable of conversing in text—answering questions, solving math problems (somewhat), writing programming code, and composing poems. Some of them also allow input and output in other forms, like audio, images, data visualization, or radio signals.

Like Ernie Bot, these services came with restrictions for user access, making it difficult for the general public in China to experience them. Some were allowed only for business uses.

One of the main reasons Chinese tech companies limited access to the general public was concern that the models could be used to generate politically sensitive information. While the Chinese government has shown it’s extremely capable of censoring social media content, new technologies like generative AI could push the censorship machine to unknown and unpredictable levels. Most current chatbots like those from Baidu and ByteDance have built-in moderation mechanisms that would refuse to answer sensitive questions about Taiwan or Chinese president Xi Jinping, but a general release to China’s 1.4 billion people would almost certainly allow users to find more clever ways to circumvent censors.

When China released its first regulation specifically targeting generative AI services in July, it included a line requesting that companies obtain “relevant administrative licenses,” though at the time the law didn’t specify what licenses it meant. 

As Bloomberg first reported, the approval Baidu obtained this week was issued by the Cyberspace Administration of China, the country’s main internet regulator, and it will allow companies to roll out their ChatGPT-style services to the whole country. But the agency has not officially announced which companies obtained the public access license or which ones have applied for it.

Even with the new access, it’s unclear how many people will use the products. The initial lack of access to Chinese chatbot alternatives decreased public interest in them. While ChatGPT has not been officially released in China, many Chinese people are able to access the OpenAI chatbot by using VPN software.

“Making Ernie Bot available to hundreds of millions of Internet users, Baidu will collect massive valuable real-world human feedback. This will not only help improve Baidu’s foundation model but also iterate Ernie Bot on a much faster pace, ultimately leading to a superior user experience,” said Robin Li, Baidu’s CEO, according to a press release from the company.

Baidu declined to give further comment. ByteDance did not immediately respond to a request for comment from MIT Technology Review.

What happened to the microfinance organization Kiva?

One morning in August 2021, as she had nearly every morning for about a decade, Janice Smith opened her computer and went to Kiva.org, the website of the San Francisco–based nonprofit that helps everyday people make microloans to borrowers around the world. Smith, who lives in Elk River, Minnesota, scrolled through profiles of bakers in Mexico, tailors in Uganda, farmers in Albania. She loved the idea that, one $25 loan at a time, she could fund entrepreneurial ventures and help poor people help themselves.

But on this particular morning, Smith noticed something different about Kiva’s website. It was suddenly harder to find key information, such as the estimated interest rate a borrower might be charged—information that had been easily accessible just the day before and felt essential in deciding who to lend to. She showed the page to her husband, Bill, who had also become a devoted Kiva lender. Puzzled, they reached out to other longtime lenders they knew. Together, the Kiva users combed through blog posts, press releases, and tax filings, but they couldn’t find a clear explanation of why the site looked so different. Instead, they learned about even bigger shifts—shifts that shocked them. 

Kiva connects people in wealthier communities with people in poorer ones through small, crowdfunded loans made to individuals through partner companies and organizations around the world. The individual Kiva lenders earn no interest; money is given to microfinance partners for free, and only the original amount is returned. Once lenders get their money back, they can choose to lend again and again. It’s a model that Kiva hopes will foster a perennial cycle of microfinance lending while requiring only a small outlay from each person.

This had been the nonprofit’s bread and butter since its founding in 2005. But now, the Smiths wondered if things were starting to change.

The Smiths and their fellow lenders learned that in 2019 the organization had begun charging fees to its lending partners. Kiva had long said it offered zero-interest funding to microfinance partners, but the Smiths learned that the recently instituted fees could reach 8%. They also learned about Kiva Capital, a new entity that allows large-scale investors—Google is one—to make big investments in microfinance companies and receive a financial return. The Smiths found this strange: thousands of everyday lenders like them had been offering return-free loans for more than a decade. Why should Google now profit off a microfinance investment? 

The Kiva users noticed that the changes happened as compensation to Kiva’s top employees increased dramatically. In 2020, the CEO took home over $800,000. Combined, Kiva’s top 10 executives made nearly $3.5 million in 2020. In 2021, nearly half of Kiva’s revenue went to staff salaries.

Considering all the changes, and the eye-popping executive compensation, “the word that kept coming up was ‘shady,’” Bill Smith told me. “Maybe what they did was legal,” he said, “but it doesn’t seem fully transparent.” He and Janice felt that the organization, which relied mostly on grants and donations to stay afloat, now seemed more focused on how to make money than how to create change.

Kiva, on the other hand, says the changes are essential to reaching more borrowers. In an interview about these concerns, Kathy Guis, Kiva’s vice president of investments, told me, “All the decisions that Kiva has made and is now making are in support of our mission to expand financial access.” 

In 2021, the Smiths and nearly 200 other lenders launched a “lenders’ strike.” They have refused to lend another cent through Kiva, or donate to the organization’s operations, until the changes are clarified—and ideally reversed. (More than a dozen concerned lenders, as well as half a dozen Kiva staff members, spoke to me for this article.)


When Kiva was founded in 2005, by Matt Flannery and Jessica Jackley, a worldwide craze for microfinance—sometimes called microcredit—was at its height. The UN had dubbed 2005 the “International Year of Microcredit”; a year later, in 2006, Muhammad Yunus and the Grameen Bank he had founded in the 1980s won the Nobel Peace Prize for creating, in the words of the Nobel Committee, “economic and social development from below.” On a trip to East Africa, Flannery and Jackley had a lightbulb moment: Why not expand microfinance by helping relatively wealthy individuals in places like the US and Europe lend to relatively poor businesspeople in places like Tanzania and Kenya? They didn’t think the loans Kiva facilitated should come from grants or donations: the money, they reasoned, would then be limited, and eventually run out. Instead, small loans—as little as $25—would be fully repayable to lenders. 

Connecting wealthier individuals to poorer ones was the “peer-to-peer” part of Kiva’s model. The second part—the idea that funding would be sourced through the internet via the Kiva.org website—took inspiration from Silicon Valley. Flannery and another Kiva cofounder, Premal Shah, both worked in tech—Flannery for TiVo, Shah for PayPal. Kiva was one of the first crowdfunding platforms, launched ahead of popular sites like GoFundMe. 

But Kiva is less direct than other crowdfunding sites. Although lenders “choose” borrowers through the website, flipping through profiles of dairy farmers and fruit sellers, money doesn’t go straight to them. Instead, the loans that pass through Kiva are bundled together and sent to one of the partnering microfinance institutions. After someone in the US selects, say, a female borrower in Mongolia, Kiva funds a microfinance organization there, which then lends to a woman who wants to set up a business.

Even though the money takes a circuitous route, the premise of lending to an individual proved immensely effective. Stories about Armenian bakers and Moroccan bricklayers helped lenders like the Smiths feel connected to something larger, something with purpose and meaning. And because they got their money back, while the feel-good rewards were high, the stakes were low. “It’s not charity,” the website still emphasizes today. “It’s a loan.” The organization covered its operating expenses with funding from the US government and private foundations and companies, as well as donations from individual lenders, who could add a tip on top of their loan to support Kiva’s costs.

This sense of individual connection and the focus on facilitating loans rather than donations was what initially drew Janice Smith. She first heard of microfinance through Bill Clinton’s book Giving, and then again through Oprah Winfrey—Kiva.org was included as one of “Oprah’s Favorite Things” in 2010. Smith was particularly enticed by the idea that she could re-lend the same $25 again and again: “I loved looking through borrower profiles and feeling like I was able to help specific people. Even when I realized that the money was going to a [microfinance lender]”—not directly to a borrower—“it still gave me a feeling of a one-on-one relationship with this person.”

Kiva’s easy-to-use website and focus on repayments helped further popularize the idea of small loans to the poor. For many Americans, if they’ve heard of microfinance at all, it’s because they or a friend or family member have lent through the platform. As of 2023, according to a Kiva spokesperson, 2.4 million people from more than 190 countries have done so, ultimately reaching more than 5 million borrowers in 95 countries. The spokesperson also pointed to a 2022 study of 18,000 microfinance customers, 88% of whom said their quality of life had improved since accessing a loan or another financial service. A quarter said the loans and other services had increased their ability to invest and grow their business. 


But Kiva has also long faced criticism, especially when it comes to transparency. There was the obvious issue that the organization suggests a direct connection between Kiva.org users and individual borrowers featured on the site, a connection that does not actually exist. But there were also complaints that the interest rates borrowers pay were not disclosed. Although Kiva initially did not charge fees to the microfinance institutions it funneled money through, the loans to the individual borrowers do include interest. The institutions Kiva partners with use that to cover operational costs and, sometimes, make a profit. 

Critics were concerned about this lack of disclosure given that interest rates on microfinance loans can reach far into the double digits—for more than a decade, some have even soared above 100%. (Microlenders and their funders have long argued that interest rates are needed to make funding sustainable.) A Kiva spokesperson stressed that the website now mentions “average cost to borrower,” which is not the interest rate a borrower will pay but a rough approximation. Over the years, Kiva has focused on partnering with “impact-first” microfinance lenders—those that charge low interest rates or focus on loans for specific purposes, such as solar lights or farming. 

Critics also point to studies showing that microfinance has a limited impact on poverty, despite claims that the loans can be transformative for poor people. For those who remain concerned about microfinance overall, the clean, easy narrative Kiva promotes is a problem. By suggesting that someone like Janice Smith can “make a loan, change a life,” skeptics charge, the organization is effectively whitewashing a troubled industry accused of high-priced loans and harsh collection tactics that have reportedly led to suicides and land grabs, as well as connections to child labor and indebted servitude.


Over her years of lending through Kiva.org, Smith followed some of this criticism, but she says she was “sucked in” from her first loan. She was so won over by the mission and the method that she soon became, in her words, a “Kivaholic.” Lenders can choose to join “teams” to lend together, and in 2015 she launched one, called Together for Women. Eventually, the team would include nearly 2,500 Kiva lenders—including one who, she says, put his “whole retirement” into Kiva, totaling “millions of dollars.” 

Smith soon developed a steady routine. She would open her computer first thing in the morning, scroll through borrowers, and post the profiles of those she considered particularly needy to her growing team, encouraging support from other lenders. In 2020, several years into her “Kivaholicism,” Kiva invited team captains like her to join regular calls with its staff, a way to disseminate information to some of the most active members. At first, these calls were cordial. But in 2021, as lenders like Smith noticed changes that concerned them, the tone of some conversations changed. Lenders wanted to know why the information on Kiva’s website seemed less accessible. And then, when they didn’t get a clear answer, they pushed on everything else, too: the fees to microfinance partners, the CEO salaries. 

In 2021 Smith’s husband, Bill, became captain of a new team calling itself Lenders on Strike, which soon had nearly 200 concerned members. The name sent a clear message: “We’re gonna stop lending until you guys get your act together and address the stuff.” Even though they represented a small fraction of those who had lent through Kiva, the striking members had been involved for years, collectively lending millions of dollars—enough, they thought, to get Kiva’s attention. 

On the captains’ calls and in letters, the strikers were clear about a top concern: the fees now charged to microfinance institutions Kiva works with. Wouldn’t the fees make the loans more expensive to the borrowers? Individual Kiva.org lenders still expected only their original money back, with no return on top. If the money wasn’t going to them, where exactly would it be going?

On one call, the Smiths recall, staffers explained that the fees were a way for Kiva to expand. Revenue from the fees—potentially millions of dollars—would go into Kiva’s overall operating budget, covering everything from new programs to site visits to staff salaries. 

Some lenders were disappointed to learn that loans don’t go directly to the borrowers featured on Kiva’s website. Instead, they are pooled together with others’ contributions and sent to partner institutions to distribute.

But on a different call, Kiva’s Kathy Guis acknowledged that the fees could be bad for poor borrowers. The higher cost might be passed down to them; borrowers might see their own interest rates, sometimes already steep, rise even more. When I spoke to Guis in June 2023, she told me those at Kiva “haven’t observed” a rise in borrowers’ rates as a direct result of the fees. Because the organization essentially acts as a middleman, it would be hard to trace this. “Kiva is one among a number of funding sources,” Guis explained—often, in fact, a very small slice of a microlender’s overall funding. “And cost of funds is one among a number of factors that influence borrower pricing.” A Kiva spokesperson said the average fee is 2.53%, with fees of 8% charged on only a handful of “longer-term, high-risk loans.”

The strikers weren’t satisfied: it felt deeply unfair to have microfinance lenders, and maybe ultimately borrowers, pay for Kiva’s operations. More broadly, they took issue with new programs the revenue was being spent on. Kiva Capital, the new return-seeking investment arm that Google has participated in, was particularly concerning. Several strikers told me that it seemed strange, if not unethical, for an investor like Google to be able to make money off microfinance loans when everyday Kiva lenders had expected no return for more than a decade—a premise that Kiva had touted as key to its model. 

A Kiva spokesperson told me investors “are receiving a range of returns well below a commercial investor’s expectations for emerging-market debt investments,” but did not give details. Guis said that thanks in part to Kiva Capital, Kiva “reached 33% more borrowers and deployed 33% more capital in 2021.” Still, the Smiths and other striking lenders saw the program less as an expansion and more as a departure from the Kiva they had been supporting for years. 

Another key concern, strikers told me, is Kiva US, a separate program that offers zero-interest loans to small businesses domestically. Janice Smith had no fundamental problem with the affordable rates, but she found it odd that an American would be offered 0% interest while borrowers in poorer parts of the world were being charged up to 70%, according to the estimates posted on Kiva’s website. “I don’t see why poor people in Guatemala should basically be subsidizing relatively rich people here in Minnesota,” she told me. Guis disagreed, telling me, “I take issue with the idea that systematically marginalized communities in the US are less deserving.” She said that in 2022, nearly 80% of the businesses that received US loans were “owned by Black, Indigenous, and people of color.”

After months of discussions, the strikers and Kiva staff found themselves at loggerheads. “They feel committed to fees as a revenue source, and we feel committed to the fact that it’s inappropriate,” Bill Smith told me. Guis stressed that Kiva had gone through many changes throughout its 18 years—the fees, Kiva Capital, and Kiva US being just a few. “You have to evolve,” she said.


The fees and the returns-oriented Kiva Capital felt strange enough. But what really irked the Lenders on Strike was how much Kiva executives were being paid for overseeing those changes. Lenders wanted to know why, according to Kiva’s tax return, roughly $3.5 million had been spent on executive compensation in 2020—nearly double the amount a few years previously. Bill Smith and others I spoke to saw a strong correlation: at the same time Kiva was finding new ways to make money, Kiva’s leadership was bringing home more cash. 

The concerned lenders weren’t the only ones to see a connection. Several employees I spoke to pointed to questionable decisions made under the four-year tenure of Neville Crawley, who was named CEO in 2017 and left in 2021. Crawley made approximately $800,000 in 2020, his last full year at the organization, and took home just under $750,000 in 2021, even though he left the position in the middle of the year. When I asked Kathy Guis why Crawley made so much for about six months of work, she said she couldn’t answer but would pass that question along to the board. 

Afterward, I received a written response that did not specifically address CEO compensation, instead noting in part, “As part of Kiva’s commitment to compensation best practices, we conduct regular org-wide compensation fairness research, administer salary surveys, and consult market data from reputable providers.” Chris Tsakalakis, who took over from Crawley, earned more than $350,000 in 2021, for about half a year of work. (His full salary and that of Vishal Ghotge, his successor and Kiva’s newest CEO, are not yet publicly available in Kiva’s tax filings, nor would Kiva release these numbers to us when we requested them.) In 2021, nearly $20 million of Kiva’s $42 million in revenue went to salaries, benefits, and other compensation. 

According to the striking lenders, Kiva’s board explained that as a San Francisco–based organization, it needed to attract top talent in a field, and a city, dominated by tech, finance, and nonprofits. The last three CEOs have had a background in business and/or tech; Kiva’s board is stacked with those working at the intersection of tech, business, and finance and headed by Julie Hanna, an early investor in Lyft and other Silicon Valley companies. This was especially necessary, the board argued, as Kiva began to launch new programs like Kiva Capital, as well as Protocol, a blockchain-enabled credit bureau launched in Sierra Leone in 2018 and then closed in 2022.

The Smiths and other striking lenders didn’t buy the rationale. The leaders of other microlenders—including Kiva partners—make far less. For example, the president and CEO of BRAC USA, a Kiva partner and one of the largest nonprofits in the world, made just over $300,000 in 2020—not only less than what Kiva’s CEO earns, but also below what Kiva’s general counsel, chief investment officer, chief strategy officer, executive vice president of engineering, and chief officer for strategic partnerships were paid in 2021, according to public filings. Julie Hanna, the executive chair of Kiva’s board, made $140,000 for working 10 hours a week in 2021. Premal Shah, one of the founders, took home roughly $320,000 as “senior consultant” in 2020.

Even among other nonprofits headquartered in expensive American cities, Kiva’s CEO salary is high. For example, the head of the Sierra Club, based in Oakland, made $500,000 in 2021. Meanwhile, the executive director of Doctors Without Borders USA, based in New York City, had a salary of $237,000 in 2020, the same year that the Kiva top executive made roughly $800,000—even though Doctors Without Borders USA took in $558 million in revenue that year, compared with Kiva’s $38 million.

The striking lenders kept pushing—on calls, in letters, on message boards—and the board kept pushing back. They had given their rationale, about the salaries and all the other changes, and as one Kiva lender told me, it was clear “there would be no more conversation.” Several strikers I spoke to said it was the last straw. This was, they realized, no longer their Kiva. Someone taking home nearly a million dollars a year was steering the ship, not them and their $25 loans.


The Kiva lenders’ strike is concentrated in Europe and North America. But I wanted to understand how the changes, particularly the new fees charged to microfinance lenders, were viewed by the microfinance organizations Kiva works with. 

So I spoke to Nurhayrah Sadava, CEO of VisionFund Mongolia, who told me she preferred the fees to the old Kiva model. Before the lending fees were introduced, money was lent from Kiva to microfinance organizations in US dollars. The partner organizations then paid the loan back in dollars too. Given high levels of inflation, instability, and currency fluctuations in poorer countries, that meant partners might effectively pay back more than they had taken out. 

But with the fees, Sadava told me, Kiva now took on the currency risk, with partners paying a little more up front. Sadava saw this as a great deal, even if it looked “shady” to the striking lenders. What’s more, the fees—around 7% to 8% in the case of VisionFund Mongolia—were cheaper than the organization’s other options: their only alternatives were borrowing from microfinance investment funds primarily based in Europe, which charged roughly 20%, or another VisionFund Mongolia lender, which charges the organization 14.5%. 

Sadava told me that big international donors aren’t interested in funding their microfinance work. Given the context, VisionFund Mongolia was happy with the new arrangement. Sadava says the relatively low cost of capital allowed them to launch “resourcefulness loans” for poor businesswomen, who she says pay 3.4% a month. 

VisionFund Mongolia’s experience isn’t necessarily representative—it became a Kiva partner after the fees were instituted, and it works in a country where it is particularly difficult to find funding. Still, I was surprised by how resoundingly positive Sadava was about the new model, given the complaints I’d heard from dozens of aggrieved Kiva staffers and lenders. That got me thinking about something Hugh Sinclair, a longtime microfinance staffer and critic, told me a few years back: “The client of Kiva is the American who gets to feel good, not the poor person.”


In a way, by designing the Kiva.org website primarily for the Western funder, not the faraway borrower, Kiva created the conditions for the lenders’ strike. 

For years, Kiva has encouraged the feeling of a personal connection between lenders and borrowers, a sense that through the organization an American can alter the trajectory of a life thousands of miles away. It’s not surprising, then, that the changes at Kiva felt like an affront. (One striker cried when he described how much faith he had put into Kiva, only for Kiva to make changes he saw as morally compromising.) They see Kiva as their baby. So they revolted.

Kiva now seems somewhat in limbo. It’s still advertising its old-school, anyone-can-be-a-lender model on Kiva.org, while also making significant operational changes (a private investing arm, the promise of blockchain-enabled technology) that are explicitly inaccessible to everyday Americans—and employing high-flying CEOs with CVs and pedigrees that might feel distant, if not outright off-putting, to them. If Kiva’s core premise has been its accessibility to people like the Smiths, it is now actively undermining that premise, taking a chance that expansion through more complicated means will be better for microfinance than honing the simplistic image it’s been built on. 

Several of the striking lenders I spoke to were primarily concerned that the Kiva model had been altered into something they no longer recognized. But Janice Smith, and several others, had broader concerns: not just about Kiva, but about the direction the whole microfinance sector was taking. In confronting her own frustrations with Kiva, Smith reflected on criticisms she had previously dismissed. “I think it’s an industry where, depending on who’s running the microfinance institution and the interaction with the borrowers, it can turn into what people call a ‘payday loan’ sort of situation,” she told me. “You don’t want people paying 75% interest and having debt collectors coming after them for the rest of their lives.” Previously, she trusted that she could filter out the most predatory situations through the Kiva website, relying on information like the estimated interest rate to guide her decisions. As information has become harder to come by, she’s had a harder time feeling confident in the terms the borrowers face. 

In January 2022, Smith closed the 2,500-strong Together for Women group and stopped lending through Kiva. Dozens of other lenders, her husband included, have done the same. 

While these defectors represent a tiny fraction of the 2 million people who have used the website, they were some of its most dedicated lenders: of the dozen I spoke to, nearly all had been involved for nearly a decade, some ultimately lending tens of thousands of dollars. For them, the dream of “make a loan, change a life” now feels heartbreakingly unattainable. 

Smith calls the day she closed her team “one of the saddest days of my life.” Still, the decision felt essential: “I don’t want to be one of those people that’s more like an impact investor who is trying to make money off the backs of the poorer.”

“I understand that I’m in the minority here,” she continued. “This is the way [microfinance is] moving. So clearly people feel it’s something that’s acceptable to them, or a good way to invest their money. I just don’t feel like it’s acceptable to me.”  

Mara Kardas-Nelson is the author of a forthcoming book on the history of microfinance, We Are Not Able to Live in the Sky (Holt, 2024).

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Recently, I took myself to one of my favorite places in New York City, the public library, to look at some of the hundreds of original letters, writings, and musings of Charles Darwin. The famous English scientist loved to write, and his curiosity and skill at observation come alive on the pages. 

In addition to proposing the theory of evolution, Darwin studied the expressions and emotions of people and animals. He debated in his writing just how scientific, universal, and predictable emotions actually are, and he sketched characters with exaggerated expressions, which the library had on display.

The subject rang a bell for me. 

Lately, as everyone has been up in arms about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, I’ve noticed that regulators have been ramping up warnings against AI and emotion recognition.

Emotion recognition, in this far-from-Darwin context, is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

The idea isn’t super complicated: the AI model may see an open mouth, squinted eyes, and contracted cheeks with a thrown-back head, for instance, and register it as a laugh, concluding that the subject is happy. 
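
To make that premise concrete, here is a deliberately toy, rule-based sketch in Python. The feature names and rules are invented purely for illustration; no real emotion-recognition system works this simply, and none of the identifiers below correspond to an actual product or library.

    # Toy sketch only: a naive rule-based mapping from detected facial features
    # to an emotion label. Real systems use learned models, and even those rest
    # on contested science, as this article goes on to argue.
    def classify_emotion(detected_features):
        """Map a set of (hypothetical) detected facial features to a coarse label."""
        if {"open_mouth", "squinted_eyes", "raised_cheeks"} <= detected_features:
            return "happy"  # e.g., registering a laugh as happiness
        if {"furrowed_brow", "pressed_lips"} <= detected_features:
            return "angry"
        return "unknown"  # most real expressions won't fit tidy rules

    print(classify_emotion({"open_mouth", "squinted_eyes", "raised_cheeks"}))  # -> happy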

But in practice, this is incredibly complex—and, some argue, a dangerous and invasive example of the sort of pseudoscience that artificial intelligence often produces. 

Certain privacy and human rights advocates, such as European Digital Rights and Access Now, are calling for a blanket ban on emotion recognition. And while the version of the EU AI Act that was approved by the European Parliament in June isn’t a total ban, it bars the use of emotion recognition in policing, border management, workplaces, and schools. 

Meanwhile, some US legislators have called out this particular field, and it appears to be a likely contender in any eventual AI regulation; Senator Ron Wyden, who is one of the lawmakers leading the regulatory push, recently praised the EU for tackling it and warned, “Your facial expressions, eye movements, tone of voice, and the way you walk are terrible ways to judge who you are or what you’ll do in the future. Yet millions and millions of dollars are being funneled into developing emotion-detection AI based on bunk science.”

But why is this a top concern? How well founded are fears about emotion recognition—and could strict regulation here actually hurt positive innovation? 

A handful of companies are already selling this technology for a wide variety of uses, though it’s not yet widely deployed. Affectiva, for one, has been exploring how AI that analyzes people’s facial expressions might be used to determine whether a car driver is tired and to evaluate how people are reacting to a movie trailer. Others, like HireVue, have sold emotion recognition as a way to screen for the most promising job candidates (a practice that has been met with heavy criticism; you can listen to our investigative audio series on the company here).

“I’m generally in favor of allowing the private sector to develop this technology. There are important applications, such as enabling people who are blind or have low vision to better understand the emotions of people around them,” Daniel Castro, vice president of the Information Technology and Innovation Foundation, a DC-based think tank, told me in an email.

But other applications of the tech are more alarming. Several companies are selling software to law enforcement that tries to ascertain if someone is lying or that can flag supposedly suspicious behavior. 

A pilot project called iBorderCtrl, sponsored by the European Union, offers a version of emotion recognition as part of its technology stack that manages border crossings. According to its website, the Automatic Deception Detection System “quantifies the probability of deceit in interviews by analyzing interviewees’ non-verbal micro-gestures” (though it acknowledges “scientific controversy around its efficacy”).

But the most high-profile use (or abuse, in this case) of emotion recognition tech is from China, and this is undoubtedly on legislators’ radars. 

The country has repeatedly used emotion AI for surveillance—notably to monitor Uyghurs in Xinjiang, according to a software engineer who claimed to have installed the systems in police stations. Emotion recognition was intended to identify a nervous or anxious “state of mind,” like a lie detector. As one human rights advocate warned the BBC, “It’s people who are in highly coercive circumstances, under enormous pressure, being understandably nervous, and that’s taken as an indication of guilt.” Some schools in the country have also used the tech on students to measure comprehension and performance.

Ella Jakubowska, a senior policy advisor at the Brussels-based organization European Digital Rights, tells me she has yet to hear of “any credible use case” for emotion recognition: “Both [facial recognition and emotion recognition] are about social control; about who watches and who gets watched; about where we see a concentration of power.” 

What’s more, there’s evidence that emotion recognition models just can’t be accurate. Emotions are complicated, and even human beings are often quite poor at identifying them in others. Even as the technology has improved in recent years, thanks to the availability of more and better data as well as increased computing power, the accuracy varies widely depending on what outcomes the system is aiming for and on the quality of the data going into it. 

“The technology is not perfect, although that probably has less to do with the limits of computer vision and more to do with the fact that human emotions are complex, vary based on culture and context, and are imprecise,” Castro told me. 

[Image: three crying babies from a series of old heliotypes, with rectangular boxes like those used to train AI over their faces. Caption: A composite of heliotypes taken by Oscar Gustave Rejlander, a photographer who worked with Darwin to capture human expression. Credit: STEPHANIE ARNETT/MITTR | REJLANDER/GETTY MUSEUM]

Which brings me back to Darwin. A fundamental tension in this field is whether science can ever determine emotions. We might see advances in affective computing as the underlying science of emotion continues to progress—or we might not. 

It’s a bit of a parable for this broader moment in AI. The technology is in a period of extreme hype, and the idea that artificial intelligence can make the world significantly more knowable and predictable can be appealing. That said, as AI expert Meredith Broussard has asked, can everything be distilled into a math problem? 

What else I’m reading

  • Political bias is seeping into AI language models, according to new research that my colleague Melissa Heikkilä reported on this week. Some models are more right-leaning and others are more left-leaning, and a truly unbiased model might be out of reach, some researchers say. 
  • Steven Lee Myers of the New York Times has a fascinating long read about how Sweden is thwarting targeted online information ops by the Kremlin, which are intended to sow division within the Scandinavian country as it works to join NATO. 
  • Kate Lindsay wrote a lovely reflection in the Atlantic about the changing nature of death in the digital age. Emails, texts, and social media posts live on long past our loved ones, changing grief and memory. (If you’re curious about this topic, a few months back I wrote about how this shift relates to changes in deletion policies by Google and Twitter.)

What I learned this week

A new study from researchers in Switzerland finds that news is highly valuable to Google Search and accounts for the majority of its revenue. The findings offer some optimism about the economics of news and publishing, especially if you, like me, care deeply about the future of journalism. Courtney Radsch wrote about the study in one of my favorite publications, Tech Policy Press. (On a related note, you should also read this sharp piece on how to fix local news from Steven Waldman in the Atlantic.)

China is escalating its war on kids’ screen time

This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

Two years ago, parents around the world likely looked at China with a bit of jealousy: the country had instituted a strict three-hour-per-week limit for children playing video games. In the time since, it’s also demanded that TikTok-like social media platforms curate a heavily filtered content pool for users under 18, while also limiting their screen time and spending in the apps. 

For better or worse, these moves have put China ahead of just about every other country in terms of controlling how minors use the internet.

But Beijing is now going even bigger: last week, the government escalated its current regime into a comprehensive set of restrictions and regulations on how children use all apps, with the goal of limiting them to age-appropriate content on their phones, smart watches, speakers, and more.

On August 2, China’s cyberspace regulator released the “Guidelines for the Establishment of Minors’ Modes for the Mobile Internet.” Essentially, this is a cross-platform, cross-device, government-led parental control system that has been painstakingly planned out by Beijing. Whereas past rules mainly required cooperation from app companies, the government is now asking three sides—app developers, app store providers, and makers of smartphones and other smart devices—to coordinate with each other on a comprehensive “minors’ mode.” This would apply to Chinese companies, though non-Chinese tech giants like Apple and Samsung would be asked to cooperate with the system too. 

The rules are incredibly specific: kids under eight, for instance, can only use smart devices for 40 minutes every day and only consume content about “elementary education, hobbies and interests, and liberal arts education”; when they turn eight, they graduate to 60 minutes of screen time and “entertainment content with positive guidance.” Honestly, this newsletter would have to go on forever to explain all the specifics.
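
As a rough illustration of just how prescriptive this is, here is a minimal sketch in code of only the two age brackets described above. The structure and names are mine, not the regulation’s, and the real guidelines define additional brackets and many more conditions.

```python
# Illustrative only: encodes just the two age brackets mentioned above, not the
# full "minors' mode" guidelines, which define more brackets and many more rules.
DAILY_LIMITS = {
    "under_8": {
        "max_minutes_per_day": 40,
        "allowed_content": [
            "elementary education",
            "hobbies and interests",
            "liberal arts education",
        ],
    },
    "age_8_and_up": {
        "max_minutes_per_day": 60,
        "allowed_content": ["entertainment content with positive guidance"],
    },
}


def remaining_minutes(age_bracket: str, minutes_used_today: int) -> int:
    """How much screen time a child in the given bracket has left today."""
    limit = DAILY_LIMITS[age_bracket]["max_minutes_per_day"]
    return max(0, limit - minutes_used_today)


print(remaining_minutes("under_8", 25))  # -> 15
```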

I think part of the reason the guidelines are so detailed—prescribing exactly the products that tech companies need to build for underage users—is that the government wants to increase enforcement and eliminate any loopholes, like those it’s seen children exploit in regulations on gaming and social media use. 

To take a step back, those rules have generally been pretty effective. A year after the three-hour-per-week gaming rules were instituted, 77% of young gamers had reduced the amount of time spent gaming each week, according to a 2022 survey conducted by Niko Partners, a research firm focusing on the Asian games market. Tencent’s earnings in the first quarter of 2023 also show “a dramatic 96% decrease in gaming hours and 90% decrease in gaming spending” by underage gamers from three years ago, Xiaofeng Zeng, a vice president at Niko Partners, tells me.

But when there are rules, there will always be workarounds. Of the gamers surveyed in 2022, 29% still reported weekly game time over three hours, mostly by using their adult relatives’ accounts. While some companies, like Tencent and NetEase, have started to use facial recognition to verify the actual player, most game developers don’t have the capability to do that yet. Underage gamers are also fueling the growth of gaming account rental platforms, which have less incentive or technological know-how to filter out underage users. 

So now Beijing is moving toward a standardized technical system that allows institutions—whether the government or private tech companies—to have almost total, end-to-end control over individual young users in areas far beyond gaming. Many parents, both in and outside China, have celebrated Beijing’s past parental controls as the right approach for a government to take. But will all those people be comfortable with the government’s ever-intensifying restrictions?

(One important caveat I should note: Jeremy Daum, a senior fellow at the Yale Law School Paul Tsai China Center, points out that the rules may not, at least at first, be binding; for example, the regulation has not laid out the liability for companies that fail to comply.)

I’m curious to see how legislators in the United States will respond, since some are trying to introduce similar rules. 

My colleague Tate Ryan-Mosley has written about the recent wave of child safety bills being proposed across the US. One of the major obstacles for these rules is that they are hard to enforce technically. In some ways, China’s detailed planning for “minors’ mode” could be instructive for other governments interested in translating child safety concerns into the language of app development and regulation. (Of course, I doubt any American legislator would publicly endorse a piece of Chinese regulation.) 

But with increased control come even more concerns about personal data (a point I also made back in March in a piece about limits on TikTok use). As Tate asked in The Technocrat, her newsletter on tech policy, in April: “[A]ll this legislation depends on verifying the ages of users online, which is hugely difficult and presents new privacy risks. Do we really want to provide driver’s license information to Meta, for example?”

Beijing has an easier time answering that question. The government has already built a comprehensive national identity verification system that the gaming and social media companies are using to discover underage users’ accounts. It is also more comfortable and adamant about deciding what content (politics, LGBTQ issues, uncensored news, etc.) is not for children. (The US is catching up on that.)

In the end, it’s the same technical system that protects children from harm, censors online speech, and collects vast amounts of personal data. It’s the same paternalistic attitude that determines what children should watch and what adults should read. How comfortable are we in pushing the balance further to the side of centralized control rather than individual decision-making?

If you are a parent, how do you feel about China’s new and old rules restricting minors’ internet use? I want to hear from you. Write to me at zeyi@technologyreview.com.

Catch up with China

1. Speaking of app stores, Apple just removed more than 100 generative AI apps from its Chinese app store because they violated the country’s new generative AI regulation. (Gizmodo)

  • The law, passed in July, is wholly focused on generative AI, continuing the Chinese government’s whack-a-mole tradition when it comes to taming new tech phenomena. (MIT Technology Review)

2. China has spent billions of dollars in recent years to build “cities like sponges,” but severe flooding this summer, which has affected 30 million people and caused 20 deaths, shows it’s not enough. (Bloomberg $)

3. TikTok could soon obtain a payment service license in Indonesia, which would boost its e-commerce ambitions. (Reuters $)

4. Neville Roy Singham, an American tech mogul, is at the center of a global web of donations that pushes pro-China talking points within progressive groups. (New York Times $)

5. How Li Ziqi, the original Chinese cottagecore creator on YouTube, rose to fame and then quietly disappeared. (New Yorker $)

6. A new report found that the solar panel industry—with its close ties to China’s Xinjiang region, where forced labor has been documented—has become less transparent about the origin of its products. (New York Times $)

7. As China’s economic growth slows, more wealthy Chinese people are turning to a US program that offers permanent residency in exchange for business investments. (Wall Street Journal $)

8. A batch of online matchmaking apps in China have been created for a new demographic: parents who want their children to marry as soon as possible. (Rest of World)

Lost in translation

The extreme summer heat of 2023 has made it a great year for Chinese air-conditioner manufacturers. According to the Chinese financial publication Yicai, the El Niño phenomenon caused temperatures to reach new heights starting in June, pushing consumers to splurge on AC purchases early this year. 

Domestic sales of AC units in the first half of 2023 increased about 40% over last year. The CEO of a Chinese home appliance company told the publication that this is the only large appliance to see an increase in sales this year. 

Meanwhile, the global demand for AC also keeps increasing (even though cooling systems are a double-edged sword when it comes to climate change). In June, China’s AC exports rose 12.2%. As the largest AC exporter in the world, the country already has an annual production capacity of 255 million units, and that’s set to increase by another 20 million this year.

One more thing

What’s the trendiest pet on Chinese social media these days? Mango pits. As the South China Morning Post reported, some people are washing, brushing, drying, and applying aloe vera gel to mango pits to make them look like animals, the seed fiber resembling fluffy hair. There are even mango pit pet influencers on social media now! I love mangoes, but I think this is going way too far.

[Image: two photos of mango pit pets, one in yellow and one in pink. Credit: SOUTH CHINA MORNING POST]

Worldcoin just officially launched. Here’s why it’s already being investigated.

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

It’s possible you’ve heard the name Worldcoin recently. It’s been getting a ton of attention—some good, some … not so good. 

It’s a project that claims to use cryptocurrency to distribute money across the world, though its bigger ambition is to create a global identity system called “World ID” that relies on individuals’ unique biometric data to prove that they are humans. It officially launched on July 24 in more than 20 countries, and Sam Altman, the CEO of OpenAI and one of the biggest tech celebrities right now, is one of the cofounders of the project.

The company makes big, idealistic promises: that it can deliver a form of universal basic income through technology to make the world a better and more equitable place, while offering a way to verify your humanity in a digital future filled with nonhuman intelligence, which it calls “proof of personhood.” If you’re thinking this sounds like a potential privacy nightmare, you’re not alone.
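
For readers wondering what “proof of personhood” actually involves, here is a minimal, purely illustrative sketch of the general idea: turn a biometric scan into a stable code, check that it hasn’t been enrolled before, and only then issue a credential. This is not Worldcoin’s actual pipeline, which relies on iris scans collected with custom hardware and is far more involved; every name below is hypothetical.

```python
import hashlib
import uuid
from typing import Optional

# Illustrative only: a toy "one person, one ID" registry. Real proof-of-personhood
# systems match fuzzy biometric templates rather than exact hashes, and raise hard
# privacy questions about where that biometric data is stored and how it is used.
enrolled_codes = set()


def biometric_code(scan_bytes: bytes) -> str:
    """Stand-in for a feature extractor that turns a raw scan into a stable code."""
    return hashlib.sha256(scan_bytes).hexdigest()


def enroll(scan_bytes: bytes) -> Optional[str]:
    """Issue a new credential only if this biometric code has never been seen."""
    code = biometric_code(scan_bytes)
    if code in enrolled_codes:
        return None  # already enrolled: the same person can't get a second ID
    enrolled_codes.add(code)
    return str(uuid.uuid4())  # the credential asserting "this is a unique human"


first_id = enroll(b"alice-iris-scan")
duplicate = enroll(b"alice-iris-scan")
print(first_id is not None, duplicate is None)  # -> True True
```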

Luckily, we have someone I’d consider the Worldcoin expert on staff here at MIT Technology Review. Last year investigative reporter Eileen Guo, with freelancer Adi Renaldi, dug into the company and found that Worldcoin’s operations were far from living up to its lofty goals and that it was collecting sensitive biometric data from many vulnerable people in exchange for cash.

As they wrote: 

“Our investigation revealed wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced. We found that the company’s representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent.” 

What’s more, the company was using test users’ sensitive, but anonymized, data to train artificial intelligence models, but Eileen and Adi found that individuals did not know their data was being used that way. 

I highly recommend you read their investigation—which builds on more than 35 interviews with Worldcoin executives, contractors, and test users recruited primarily in developing countries—to better understand how the company was handling sensitive personal data and how its idealistic rhetoric compared with the realities on the ground. 

Given their reporting, it’s no surprise that regulators in at least four countries have already launched investigations into the project, citing concerns with its privacy practices. The company claims it has already scanned nearly 2.2 million “unique humans” into its database, which was primarily built during an extended test period over the last two years. 

So I asked Eileen: What really has changed since her investigation? How do we make sense of the latest news?

Since her story, Worldcoin CEO Alex Blania has told other outlets that the company has changed many of its data collection and privacy practices, though there are reasons to be skeptical. The company hasn’t specified exactly how it’s done this, beyond saying it has stopped some of the most exploitative and deceptive recruitment tactics.

In emails Eileen recently exchanged with Worldcoin, a spokesperson was vague about how the company was handling personal data, saying that “the Worldcoin Foundation complies with all laws and regulations governing the processing of personal data in the markets where Worldcoin is available, including the General Data Protection Regulation (‘GDPR’) … The project will continue to cooperate with governing bodies on requests for more information about its privacy and data protection practices.” 

The spokesperson added, “It is important to stress that The Worldcoin Foundation and its contributor Tools for Humanity never have and never will sell users’ personal data.” 

But, Eileen notes, we (again) have nothing but the company’s word that this is true. That’s one reason we should keep a close eye on what government investigators start to uncover about Worldcoin. 

The legality of Worldcoin’s biometric data collection is at the heart of an investigation the French government launched into Worldcoin and a probe by a German data protection agency, which has been investigating Worldcoin since November of last year, according to Reuters. On July 25, the Information Commissioner’s Office in the UK put out a statement that it will be “making enquiries” into the company. Then on August 2, Kenya’s Office of Data Protection suspended the project in the country, saying it will investigate whether Worldcoin is in compliance with the country’s Data Protection Act. 

Importantly, a core objective of the Worldcoin project is to perfect its “proof of personhood” methodology, which requires a lot of data to train AI models. If its proof-of-personhood system becomes widely adopted, this could be quite lucrative for its investors, particularly during an AI gold rush like the one we’re seeing now. 

The company announced this week that it will allow other companies and governments to deploy its identity system.

“Worldcoin’s proposed identity solution is problematic whether or not other companies and governments use it. Of course, it would be worse if it were used more broadly without so many key questions being answered,” says Eileen. “But I think at this stage, it’s clever marketing to try to convince everyone to get scanned and sign up so that they can achieve the ‘fastest’ and ‘biggest onboarding into crypto and Web3’ to date, as Blania told me last year.”

Eileen points out that Worldcoin has also not yet clarified whether it still uses the biometric data it collects to train its artificial intelligence models, or whether it has deleted the biometric data it already collected from test users and was using in training, as it told MIT Technology Review it would do before launch. 

“I haven’t seen anything that suggests that they’ve actually stopped training their algorithms—or that they ever would,” Eileen says. “I mean, that’s the point of AI, right? that it’s supposed to get smarter.”

What else I’m reading

  • Meta’s oversight board, which issues independently drafted and binding policies, is reviewing how the company is handling misinformation about abortion. Currently, the company’s moderation decisions are a bit of a mess, according to this nice explainer-y piece in Slate. We should expect the board to issue new abortion-information-specific policies in the coming weeks. 
  • At the end of July, Twitter rebranded to X, in a strange, unsurprising-yet-surprising move by its new czar Elon. I loved Casey Newton’s obituary-style take, in which he argues that Musk’s $44 billion investment was really just a wasteful act of “cultural vandalism.” 
  • Nobel-winning economist Joseph Stiglitz is worried that AI will worsen inequality, and he spoke with Scientific American about how we might get off the path we seem to currently be on. Well worth a read! 

What I learned this week

Bots on social media are likely being supercharged by ChatGPT. Researchers from Indiana University have released a preprint paper that shows a Twitter botnet of over 1,000 accounts, which the researchers call fox8, “that appears to employ ChatGPT to generate human-like content.” The botnet promoted fake-news websites and stolen images, and it’s an alarming preview of a social media environment fueled by AI and machine-generated misinformation. Tech Policy Press wrote a great quick analysis on the findings, which I’d recommend checking out.

Additional reporting from Eileen Guo.

The race to find a better way to label AI

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

I recently wrote a short story about a project backed by some major tech and media companies trying to help identify content made or altered by AI. 

With the boom of AI-generated text, images, and videos, both lawmakers and average internet users have been calling for more transparency. Though it might seem like a very reasonable ask to simply add a label (which it is), it is not actually an easy one, and the existing solutions, like AI-powered detection and watermarking, have some serious pitfalls. 

As my colleague Melissa Heikkilä has written, most of the current technical solutions “don’t stand a chance against the latest generation of AI language models.” Nevertheless, the race to label and detect AI-generated content is on.

That’s where this protocol comes in. Started in 2021, C2PA (named for the group that created it, the Coalition for Content Provenance and Authenticity) is a set of new technical standards and freely available code that securely labels content with information clarifying where it came from.

This means that an image, for example, is marked with information by the device it originated from (like a phone camera), by any editing tools (such as Photoshop), and ultimately by the social media platform that it gets uploaded to. Over time, this information creates a sort of history, all of which is logged.

The tech itself—and the ways in which C2PA is more secure than other AI-labeling alternatives—is pretty cool, though a bit complicated. I get more into it in my piece, but it’s perhaps easiest to think about it like a nutrition label (which is the preferred analogy of most people I spoke with). You can see an example of a deepfake video here with the label created by Truepic, a founding C2PA member, with Revel AI.

“The idea of provenance is marking the content in an interoperable and tamper-evident way so it can travel through the internet with that transparency, with that nutrition label,” says Mounir Ibrahim, the vice president of public affairs at Truepic. 
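
To make the nutrition-label idea concrete, here is a minimal, purely illustrative sketch of how a chain of provenance entries might accumulate as a photo moves from camera to editing tool to platform. It is not the actual C2PA manifest format, which is cryptographically signed and far more elaborate; all the names here are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceEntry:
    """One step in a piece of content's history (capture, edit, upload, ...)."""
    actor: str         # e.g. a camera app, an editing tool, a social platform
    action: str        # e.g. "captured", "ai_generative_fill", "uploaded"
    timestamp: str
    content_hash: str  # hash of the media bytes as they exist after this step


@dataclass
class ProvenanceManifest:
    """A simplified, append-only log of everything that touched the content."""
    entries: list = field(default_factory=list)

    def add_entry(self, actor: str, action: str, media_bytes: bytes) -> None:
        # Hashing the media after each step makes undisclosed later edits
        # detectable as a mismatch between the file and its last recorded hash.
        self.entries.append(ProvenanceEntry(
            actor=actor,
            action=action,
            timestamp=datetime.now(timezone.utc).isoformat(),
            content_hash=hashlib.sha256(media_bytes).hexdigest(),
        ))

    def to_label(self) -> str:
        """Render the history as a human-readable 'nutrition label'."""
        return json.dumps([vars(e) for e in self.entries], indent=2)


# Example: a photo is captured, edited with an AI tool, then uploaded.
photo = b"...raw image bytes..."
edited = photo + b"...ai edit..."
manifest = ProvenanceManifest()
manifest.add_entry("PhoneCamera 3.1", "captured", photo)
manifest.add_entry("ImageEditor 2024", "ai_generative_fill", edited)
manifest.add_entry("SocialApp", "uploaded", edited)
print(manifest.to_label())
```

The property the real protocol adds, and this sketch only gestures at, is tamper evidence: because the entries are cryptographically signed, a viewer can tell when the recorded history has been stripped or altered.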

When it first launched, C2PA was backed by a handful of prominent companies, including Adobe and Microsoft, but over the past six months, its membership has increased 56%. Just this week, the major media platform Shutterstock announced that it would use C2PA to label all of its AI-generated media.  

It’s based on an opt-in approach, so groups that want to verify and disclose where content came from, like a newspaper or an advertiser, will choose to add the credentials to a piece of media. 

One of the project’s leads, Andy Parsons, who works for Adobe, attributes the new interest in and urgency around C2PA to the proliferation of generative AI and the expectation of legislation, both in the US and the EU, that will mandate new levels of transparency.

The vision is grand—people involved admitted to me that real success here depends on widespread, if not universal, adoption. They said they hope all major content companies adopt the standard. 

For that, Ibrahim says, usability is key: “You wanna make sure no matter where it goes on the internet, it’ll be read and ingested in the same way, much like SSL encryption. That’s how you scale a more transparent ecosystem online.”

This could be a critical development as we enter the US election season, when all eyes will be watching for AI-generated misinformation. Researchers on the project say they are racing to release new functionality and court more social media platforms before the expected onslaught. 

Currently, C2PA works primarily on images and video, though members say that they are working on ways to handle text-based content. I get into some of the other shortcomings of the protocol in the piece, but what’s really important to understand is that even when the use of AI is disclosed, it might not stem the harm of machine-generated misinformation. Social media platforms will still need to decide whether to keep that information on their sites, and users will have to decide for themselves whether to trust and share the content. 

It’s a bit reminiscent of initiatives by tech platforms over the past several years to label misinformation. Facebook labeled over 180 million posts as misinformation ahead of the 2020 election, and clearly there were still considerable issues. And though C2PA does not intend to assign indicators of accuracy to the posts, it’s clear that just providing more information about content can’t necessarily save us from ourselves. 

What I am reading this week

  • We published a handy roadmap that outlines how AI might impact domestic politics, and what milestones to watch for. It’s fascinating to think about AI submitting or contributing to a public testimony, for example. 
  • Vittoria Elliott wrote a very timely story about how watermarking, which is also meant to bring transparency to AI-generated content, is not sufficient in managing the threat of disinformation. She explains that experts say the White House needs to do more than just push voluntary agreements on AI.
    • And here’s another story I thought was interesting on the race to develop better watermarking tech.
  • Speaking of AI … our AI reporter Melissa also wrote about a new tool developed by MIT researchers that can help prevent photos from being manipulated by AI. It might help prevent problems like nonconsensual AI-generated porn made from real photos of women. 
  • TikTok is dipping its toe further into e-commerce. New features on the app allow users to purchase products directly from influencers, leading some to complain about a feed that feels like a flood of sponsored content. It’s a mildly alarming development in the influencer economy and highlights the selling power of social media platforms.

What I learned this week

Researchers are still trying to sort out just how social media platforms, and their algorithms, affect our political beliefs and civic discourse. This week, four new studies about the impact of Facebook and Instagram on users’ politics during the 2020 election showed that the effects are quite complicated. The studies, conducted by researchers at the University of Texas, New York University, Princeton, and other institutions, found that while the news people read on the platforms showed a high degree of segregation by political views, removing reshared content from feeds on Facebook did not change political beliefs. 

The size of the studies is making them sort of a big deal in the academic world this week, but the research is getting some scrutiny for its close collaboration with Meta.

Six ways that AI could change politics

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance. 

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades.

Threats of this sort seem urgent and disturbing because they’re salient. We know what to look for, and we can easily imagine their effects.

The truth is, the future will be much more interesting. And even some of the most stupendous potential impacts of AI on politics won’t be all bad. We can draw some fairly straight lines between the current capabilities of AI tools and real-world outcomes that, by the standards of current public understanding, seem truly startling.

With this in mind, we propose six milestones that will herald a new era of democratic politics driven by AI. All feel achievable—perhaps not with today’s technology and levels of AI adoption, but very possibly in the near future.

What makes for a political AI milestone?

Good benchmarks should be meaningful, representing significant outcomes that come with real-world consequences. They should be plausible, meaning realistically achievable in the foreseeable future. And they should be observable—we should be able to recognize when they’ve been achieved.

Worries about AI swaying an election will very likely fail the observability test. While the risk of election manipulation through the robotic promotion of a candidate’s or party’s interests is a legitimate threat, elections are massively complex. Just as the debate continues to rage over why and how Donald Trump won the presidency in 2016, we’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.

Thinking further into the future: Could an AI candidate ever be elected to office? In the world of speculative fiction, from The Twilight Zone to Black Mirror, there is growing interest in the possibility of an AI or technologically assisted, otherwise-not-traditionally-eligible candidate winning an election. In an era where deepfaked videos can misrepresent the views and actions of human candidates and human politicians can choose to be represented by AI avatars or even robots, it is certainly possible for an AI candidate to mimic the media presence of a politician. Virtual politicians have received votes in national elections, for example in Russia in 2017. But this doesn’t pass the plausibility test. The voting public and legal establishment are likely to accept more and more automation and assistance supported by AI, but the age of non-human elected officials is far off.

The next political milestones for AI

Let’s start with some milestones that are already on the cusp of reality. These are achievements that seem well within the technical scope of existing AI technologies and for which the groundwork has already been laid.

Milestone #1: The acceptance by a legislature or agency of a testimony or comment generated by, and submitted under the name of, an AI.

Arguably, we’ve already seen legislation drafted by AI, albeit under the direction of human users and introduced by human legislators. After some early examples of bills written by AIs were introduced in Massachusetts and the US House of Representatives, many major legislative bodies have had their “first bill written by AI,” “first committee remarks generated with ChatGPT,” or “first floor speech written by AI” events.

Many of these bills and speeches are more stunt than serious, and they have received more criticism than consideration. They are short, have trivial levels of policy substance, or were heavily edited or guided by human legislators (through highly specific prompts to large language model–based AI tools like ChatGPT).

The interesting milestone along these lines will be the acceptance of testimony on legislation, or a comment submitted to an agency, drafted entirely by AI. To be sure, a large fraction of all writing going forward will be assisted by—and will truly benefit from—AI assistive technologies. So to avoid making this milestone trivial, we have to add the second clause: “submitted under the name of the AI.”

What would make this benchmark significant is the submission under the AI’s own name; that is, the acceptance by a governing body of the AI as proffering a legitimate perspective in public debate. Regardless of the public fervor over AI, this one won’t take long. The New York Times has published a letter under the name of ChatGPT (responding to an opinion piece we wrote), and legislators are already turning to AI to write high-profile opening remarks at committee hearings.

Milestone #2: The adoption of the first novel legislative amendment to a bill written by AI.

Moving beyond testimony, there is an immediate pathway for AI-generated policies to become law: microlegislation. This involves making tweaks to existing laws or bills that are tuned to serve some particular interest. It is a natural starting point for AI because it’s tightly scoped, involving small changes guided by a clear directive associated with a well-defined purpose.

By design, microlegislation is often implemented surreptitiously. It may even be filed anonymously within a deluge of other amendments to obscure its intended beneficiary. For that reason, microlegislation can often be bad for society, and it is ripe for exploitation by generative AI that would otherwise be subject to heavy scrutiny from a polity on guard for risks posed by AI.

Milestone #3: AI-generated political messaging outscores campaign consultant recommendations in poll testing.

Some of the most important near-term implications of AI for politics will happen largely behind closed doors. Like everyone else, political campaigners and pollsters will turn to AI to help with their jobs. We’re already seeing campaigners turn to AI-generated images to manufacture social content and pollsters simulate results using AI-generated respondents.

The next step in this evolution is political messaging developed by AI. A mainstay of the campaigner’s toolbox today is the message testing survey, where a few alternate formulations of a position are written down and tested with audiences to see which will generate more attention and a more positive response. Just as an experienced political pollster can anticipate effective messaging strategies pretty well based on observations from past campaigns and their impression of the state of the public debate, so can an AI trained on reams of public discourse, campaign rhetoric, and political reporting.

More futuristic achievements of AI as democratic actors

With these near-term milestones firmly in sight, let’s look further to some truly revolutionary possibilities. While these concepts may have seemed absurd just a year ago, they are increasingly conceivable with either current or near-future technologies.

Milestone #4: AI creates a political party with its own platform, attracting human candidates who win elections.

While an AI is unlikely to be allowed to run for and hold office, it is plausible that one may be able to found a political party. An AI could generate a political platform calculated to attract the interest of some cross-section of the public and, acting independently or through a human intermediary (hired help, like a political consultant or legal firm), could register formally as a political party. It could collect signatures to win a place on ballots and attract human candidates to run for office under its banner.

A big step in this direction has already been taken, via the campaign of the Danish Synthetic Party in 2022. An artist collective in Denmark created an AI chatbot to interact with human members of its community on Discord, exploring political ideology in conversation with them and on the basis of an analysis of historical party platforms in the country. All this happened with earlier generations of general purpose AI, not current systems like ChatGPT. However, the party failed to receive enough signatures to earn a spot on the ballot, and therefore did not win parliamentary representation.

Future AI-led efforts may succeed. One could imagine that a generative AI with skills at or beyond the level of today’s leading technologies could formulate a set of policy positions targeted to build support among people of a specific demographic, or even an effective consensus platform capable of attracting broad-based support. Particularly in a European-style multiparty system, we can imagine a new party with a strong news hook—an AI at its core—winning attention and votes.

Milestone #5: AI autonomously generates profit and makes political campaign contributions.

Let’s turn next to the essential capability of modern politics: fundraising. “An entity capable of directing contributions to a campaign fund” might be a realpolitik definition of a political actor, and AI is potentially capable of this.

Like a human, an AI could conceivably generate contributions to a political campaign in a variety of ways. It could take a seed investment from a human controlling the AI and invest it to yield a return. It could start a business that generates revenue. There is growing interest and experimentation in auto-hustling: AI agents that set about autonomously growing businesses or otherwise generating profit. While ChatGPT-generated businesses may not yet have taken the world by storm, this possibility is in the same spirit as the algorithmic agents powering modern high-speed trading and so-called autonomous finance capabilities that are already helping to automate business and financial decisions.

Or, like most political entrepreneurs, AI could generate political messaging to convince humans to spend their own money on a defined campaign or cause. The AI would likely need to have some humans in the loop and to register its activities with the government (in the US context, through human officers of a 501(c)(4) or political action committee).

Milestone #6: AI achieves a coordinated policy outcome across multiple jurisdictions.

Lastly, we come to the most meaningful of impacts: achieving outcomes in public policy. Even if AI cannot—now or in the future—be said to have its own desires or preferences, it could be programmed by humans to have a goal, such as lowering taxes or easing a market regulation.

An AI has many of the same tools humans use to achieve these ends. It may advocate, formulating messaging and promoting ideas through digital channels like social media posts and videos. It may lobby, directing ideas and influence to key policymakers, even writing legislation. It may spend; see milestone #5.

The “multiple jurisdictions” piece is key to this milestone. A single law passed may be reasonably attributed to myriad factors: a charismatic champion, a political movement, a change in circumstances. The influence of any one actor, such as an AI, will be more demonstrable if it is successful simultaneously in many different places. And the digital scalability of AI gives it a special advantage in achieving these kinds of coordinated outcomes.

Will we know when the future is here?

The greatest challenge to most of these milestones is their observability: will we know it when we see it? The first campaign consultant whose ideas lose out to an AI may not be eager to report that fact. Neither will the campaign. Regarding fundraising, it’s hard enough for us to track down the human actors who are responsible for the “dark money” contributions controlling much of modern political finance; will we know if a future dominant force in fundraising for political action committees is an AI?

We’re likely to observe some of these milestones indirectly. At some point, perhaps politicians’ dollars will start migrating en masse to AI-based campaign consultancies and, eventually, we may realize that political movements sweeping across states or countries have been AI-assisted.

While the progression of technology is often unsettling, we need not fear these milestones. A new political platform that wins public support is itself a neutral proposition; it may lead to good or bad policy outcomes. Likewise, a successful policy program may or may not be beneficial to one group of constituents or another.

We think the six milestones outlined here are among the most viable and meaningful upcoming interactions between AI and democracy, but they are hardly the only scenarios to consider. The point is that our AI-driven political future will involve far more than deepfaked campaign ads and manufactured letter-writing campaigns. We should all be thinking more creatively about what comes next and be vigilant in steering our politics toward the best possible ends, no matter their means.