What a massive thermal battery means for energy storage

Rondo Energy just turned on what it says is the world’s largest thermal battery, an energy storage system that can take in electricity and provide a consistent source of heat.

The company announced last week that its first full-scale system is operational, with 100 megawatt-hours of capacity. The thermal battery is powered by an off-grid solar array and will provide heat for enhanced oil recovery (more on this in a moment).

Thermal batteries could help clean up difficult-to-decarbonize sectors like manufacturing and heavy industrial processes like cement and steel production. With Rondo’s latest announcement, the industry has reached a major milestone in its effort to prove that thermal energy storage can work in the real world. Let’s dig into this announcement, what it means to have oil and gas involved, and what comes next.

The concept behind a thermal battery is remarkably simple: Use electricity to heat up some cheap, sturdy material (like bricks) and keep it hot until you want to use that heat later, either directly in an industrial process or to produce electricity.

Rondo’s new system has been operating for 10 weeks and achieved all the relevant efficiency and reliability benchmarks, according to the company. The bricks reach temperatures over 1,000 °C (about 1,800 °F), and over 97% of the energy put into the system is returned as heat.
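For a sense of scale, here's a back-of-envelope sketch (my own estimate, not Rondo's published specs) of how much brick it takes to hold 100 MWh as sensible heat, assuming a specific heat of roughly 0.84 kJ/(kg·K) for brick and an ~800 K usable temperature swing:

```python
# Back-of-envelope estimate: mass of brick needed to store a given amount of
# energy as sensible heat. The specific heat and temperature swing below are
# illustrative assumptions, not figures from Rondo.

def brick_mass_tonnes(energy_mwh, specific_heat_j_per_kg_k=840.0, delta_t_k=800.0):
    """Return the mass of brick (tonnes) needed to hold energy_mwh as sensible heat."""
    energy_j = energy_mwh * 3.6e9  # 1 MWh = 3.6e9 J
    mass_kg = energy_j / (specific_heat_j_per_kg_k * delta_t_k)
    return mass_kg / 1000.0

# Under these assumptions, a 100 MWh system needs on the order of 540 tonnes of brick.
```

That is a lot of material, but brick is cheap and durable, which is the whole appeal of the approach.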

This is a big step from the 2 MWh pilot system that Rondo started up in 2023, and it’s the first of the mass-produced, full-size heat batteries that the company hopes to put in the hands of customers.

Thermal batteries could be a major tool in cutting emissions: 20% of total energy demand today is used to provide heat for industrial processes, and most of that is generated by burning fossil fuels. So this project’s success is significant for climate action.

There’s one major detail here, though, that dulls some of that promise: This battery is being used for enhanced oil recovery, a process where steam is injected down into wells to get stubborn oil out of the ground.

It can be tricky for a climate technology to prove its merit by helping extract fossil fuels. Some critics argue that these sorts of techniques keep that polluting infrastructure running longer.

When I spoke to Rondo founder and chief innovation officer John O’Donnell about the new system, he defended the choice to work with oil and gas.

“We are decarbonizing the world as it is today,” O’Donnell says. To his mind, it’s better to help an oil and gas company use solar power for its operation than leave it to continue burning natural gas for heat. Between cheap solar, expensive natural gas, and policies in California, he adds, Rondo’s technology made sense for the customer.

Having a willing customer pay for a full-scale system has been crucial to Rondo’s effort to show that it can deliver its technology.

And the next units are on the way: Rondo is currently building three more full-scale units in Europe. The company will be able to bring them online cheaper and faster because of what it’s learned from the California project, O’Donnell says. 

And the company can build more batteries, quickly: its factory in Thailand has the capacity to make 2.4 gigawatt-hours’ worth of heat batteries today.

I’ve been following progress on thermal batteries for years, and this project obviously represents a big step forward. For all the promises of cheap, robust energy storage, there’s nothing like actually building a large-scale system and testing it in the field.

It’s definitely hard to get excited about enhanced oil recovery—we need to stop burning fossil fuels, and do it quickly, to avoid the worst impacts of climate change. But I see the argument that as long as oil and gas operations exist, there’s value in cleaning them up.

And as O’Donnell puts it, heat batteries can help: “This is a really dumb, practical thing that’s ready now.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

3 Things Stephanie Arnett is into right now

Dungeon Crawler Carl, by Matt Dinniman

This science fiction book series confronted me with existential questions like “Are we alone in the universe?” and “Do I actually like LitRPG??” (LitRPG, which stands for “literary role-playing game,” is a relatively new genre that merges the conventions of computer RPGs with those of science fiction and fantasy novels.) In the series, aliens destroy most of Earth, leaving the titular Carl and Princess Donut, his ex-girlfriend’s cat, to fight in a bloodthirsty game of survival with rules that are part reality TV and part video game dungeon crawl. I particularly recommend the audiobook, voiced by Jeff Hays, which makes the numerous characters easy to differentiate.

Journaling, offline and open-source

For years I’ve tried to find a perfect system to keep track of all my random notes and weird little rabbit holes of inspiration. None of my paper journals or paid apps have been able to top how customizable and convenient the developer-favorite notetaking app Obsidian is. Thanks to this app, I’ve been able to cancel subscription services I was using to track my reading habits, fitness goals, and journaling, and I also use it to track tasks I do for work, like drafting this article. It’s open-source and files are stored on my device, so I don’t have to worry about whether I’m sharing my private thoughts with a company that might scrape them for AI.

Bird-watching with Merlin 

Sometimes I have to make a conscious effort to step away from my screens and get out in the world. The latest version of the birding app Merlin, from the Cornell Lab of Ornithology, helps ease the transition. I can “collect” and identify species via step-by-step questions, photos, or, my favorite, audio that I record so that the app can analyze it to indicate which birds are singing in real time. Using the audio feature, I “captured” the red-eyed vireo flitting up in the tree canopy and backlit by the sun. Fantastic for my backyard feeder or while I’m out on the trail.

Dispatch: Partying at one of Africa’s largest AI gatherings

It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. The room is draped in white curtains, and a giant screen blinks with videos created with generative AI. A classic East African folk song by the Tanzanian singer Saida Karoli plays loudly on the speakers.

Friends greet each other as waiters serve arrowroot crisps and sugary mocktails. A man and a woman wearing leopard skins atop their clothes sip beer and chat; many women are in handwoven Ethiopian garb with red, yellow, and green embroidery. The crowd teems with life. “The best thing about the Indaba is always the parties,” computer scientist Nyalleng Moorosi tells me. Indaba means “gathering” in Zulu, and Deep Learning Indaba, where we’re meeting, is an annual AI conference where Africans present their research and technologies they’ve built.

Moorosi is a senior researcher at the Distributed AI Research Institute and has dropped in for the occasion from the mountain kingdom of Lesotho. Dressed in her signature “Mama Africa” headwrap, she makes her way through the crowded hall.

Moments later, a cheerful set of Nigerian music begins to play over the speakers. Spontaneously, people pop up and gather around the stage, waving flags of many African nations. Moorosi laughs as she watches. “The vibe at the Indaba—the community spirit—is really strong,” she says, clapping.

Moorosi is one of the founding members of the Deep Learning Indaba, which began in 2017 from a nucleus of 300 people gathered in Johannesburg, South Africa. Since then, the event has expanded into a prestigious pan-African movement with local chapters in 50 countries.

This year, nearly 3,000 people applied to join the Indaba; about 1,300 were accepted. They hail primarily from English-speaking African countries, but this year I noticed a new influx from Chad, Cameroon, the Democratic Republic of Congo, South Sudan, and Sudan. 

Moorosi tells me that the main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. Indeed, the organizations I’ve seen at the event include Microsoft Research’s AI for Good Lab, Google, the Mastercard Foundation, and the Mila–Quebec AI Institute. But she hopes to see more homegrown ventures create opportunities within Africa.

That evening, before the dinner, we’d both attended a panel on AI policy in Africa. Experts discussed AI governance and called for those developing national AI strategies to seek more community engagement. People raised their hands to ask how young Africans could access high-level discussions on AI policy, and whether Africa’s continental AI strategy was being shaped by outsiders. Later, in conversation, Moorosi told me she’d like to see more African priorities (such as African Union–backed labor protections, mineral rights, or safeguards against exploitation) reflected in such strategies. 

On the last day of the Indaba, I ask Moorosi about her dreams for the future of AI in Africa. “I dream of African industries adopting African-built AI products,” she says, after a long moment. “We really need to show our work to the world.” 

Abdullahi Tsanni is a science writer based in Senegal who specializes in narrative features. 

Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up. The United States is in fact facing a critical shortage of both embryologists and genetic counselors. 

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. 

Wiemer is the chief scientific officer and head of clinical affairs at Fairtility, a company that uses artificial intelligence to shed light on the viability of eggs and embryos before proceeding with IVF. The company’s algorithm, called CHLOE (for Cultivating Human Life through Optimal Embryos), has been trained on millions of embryo data points and outcomes and can quickly sift through a patient’s embryos to point the clinician to the ones with the highest potential for successful implantation. This, the company claims, will improve time to pregnancy and live births. While its effectiveness has been tested only retrospectively to date, CHLOE is the first and only FDA-approved AI tool for embryo assessment. 

Current challenge 

When a patient undergoes IVF, the goal is to make genetically normal embryos. Embryologists collect cells from each embryo and send them off for external genetic testing. The results of this biopsy can take up to two weeks, and the process can add thousands of dollars to the treatment cost. Moreover, passing the screen just means an embryo has the correct number of chromosomes. That number doesn’t necessarily reflect the overall health of the embryo. 

“An embryo has one singular function, and that is to divide,” says Wiemer. “There are millions of data points concerning embryo cell division, cell division characteristics, area and size of the inner cell mass, and the number of times the trophectoderm [the layer that contributes to the future placenta] contracts.”

The AI model allows for a group of embryos to be constantly measured against the optimal characteristics at each stage of development. “What CHLOE answers is: How well did that embryo develop? And does it have all the necessary components that are needed in order to make a healthy implantation?” says Wiemer. CHLOE produces an AI score reflecting all the analysis that’s been done within an embryo. 

In the near future, Wiemer says, reducing the percentage of abnormal embryos that IVF clinics transfer to patients should not require a biopsy: “Every embryology laboratory will be doing automatic assessments of embryo development.” 

A changing field

Wiemer, who started his career in animal science, says the difference between animal embryology and human embryology is the extent of paperwork. “Embryologists spend 40% of their time on non-embryology skills,” he adds. “AI will allow us to declutter the embryology field so we can get back to being true scientists.” This means spending more time studying the embryos, ensuring that they are developing normally, and using all that newfound information to get better at picking which embryos to transfer. 

“CHLOE is like having a virtual assistant in the lab to help with embryo selection, ensure conditions are optimal, and send out reports to patients and clinical staff,” he says. “Getting to study data and see what impacts embryo development is extremely rewarding, given that this capability was impossible a few years ago.” 

Amanda Smith is a freelance journalist and writer reporting on culture, society, human interest, and technology.

Inside the archives of the NASA Ames Research Center

At the southern tip of San Francisco Bay, surrounded by the tech giants Google, Apple, and Microsoft, sits the historic NASA Ames Research Center. Its rich history includes a grab bag of fascinating scientific research involving massive wind tunnels, experimental aircraft, supercomputing, astrobiology, and more.

Founded in 1939 as a West Coast lab for the National Advisory Committee for Aeronautics (NACA), NASA Ames was built to close the US gap with Germany in aeronautics research. Named for NACA founding member Joseph Sweetman Ames, the facility grew from a shack on Moffett Field into a sprawling compound with thousands of employees. A collection of 5,000 images from NASA Ames’s archives paints a vivid picture of bleeding-edge work at the heart of America’s technology hub. 

Wind tunnels

NASA AMES RESEARCH CENTER ARCHIVES

A key motivation for the new lab was the need for huge wind tunnels to jump-start America’s aeronautical research, which was far behind Germany’s. Smaller tunnels capable of speeds up to 300 miles per hour were built first, followed by a massive 40-by-80-foot tunnel for full-scale aircraft. Powered up in March 1941, these tunnels became vital after Pearl Harbor, helping scientists rapidly develop advanced aircraft.

Today, NASA Ames operates the world’s largest pressurized wind tunnel, with subsonic and transonic chambers for testing rockets, aircraft, and wind turbines.

Pioneer and Voyager 2

NASA AMES RESEARCH CENTER ARCHIVES

From 1965 to 1992, Ames managed the Pioneer missions, which explored the moon, Venus, Jupiter, and Saturn. It also contributed to Voyager 2, launched in 1977, which journeyed past four planets before entering interstellar space in 2018. Ames’s archive preserves our first glimpses of strange new worlds seen during these pioneering missions.

Odd aircraft

aircraft in flight

NASA AMES RESEARCH CENTER ARCHIVES

The skeleton of a hulking airship hangar, obsolete even before its completion, remains on NASA Ames’s campus.

Many odd-looking experimental aircraft, such as vertical take-off and landing (VTOL) aircraft, jets, and rotorcraft, have been developed and tested at the facility over the years, and new designs continue to take shape there today.

Vintage illustrations

NASA AMES RESEARCH CENTER ARCHIVES

Awe-inspiring retro illustrations in the Ames archives depict surfaces of distant planets, NASA spacecraft descending into surreal alien landscapes, and fantastical renderings of future ring-shaped human habitats in space. The optimism and excitement of the ’70s and ’80s is evident. 

Bubble suits and early VR

person in an early VR suit

NASA AMES RESEARCH CENTER ARCHIVES

In the 1980s, NASA Ames researchers worked to develop next-generation space suits, such as the bulbous, hard-shelled AX-5 model. NASA Ames’s Human-Machine Interaction Group also did pioneering work in the 1980s with virtual reality and came up with some wild-looking hardware. Long before today’s AR/VR boom, Ames researchers glimpsed the technology’s potential, which was limited only by computing power.

Decades of federally funded research at Ames fueled breakthroughs in aviation, spaceflight, and supercomputing, an enduring legacy now at risk as federal grants for science face deep cuts.

A version of this story appeared on Beautiful Public Data (beautifulpublicdata.com), a newsletter by Jon Keegan that curates visually interesting data sets collected by local, state, and federal government agencies.

AI could predict who will have a heart attack

For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups like Bunkerhill Health, Nanox.AI, and HeartLung Technologies are applying AI algorithms to screen millions of CT scans for early signs of heart disease. This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale while raising thorny questions about implementation and even how we define disease. 

Last year, an estimated 20 million Americans had chest CT scans done, after an event like a car accident or to screen for lung cancer. Frequently, these scans show evidence of coronary artery calcium (CAC), a marker of heart attack risk, that is buried in or missing from a radiology report focused on ruling out bony injuries, life-threatening internal trauma, or cancer.

Dedicated testing for CAC remains an underutilized method of predicting heart attack risk. Over decades, plaque in heart arteries moves through its own life cycle, hardening from lipid-rich residue into calcium. Heart attacks themselves typically occur when younger, lipid-rich plaque unpredictably ruptures, kicking off a clotting cascade of inflammation that ultimately blocks the heart’s blood supply. Calcified plaque is generally stable, but finding CAC suggests that younger, more rupture-prone plaque is likely present too. 

Coronary artery calcium can often be spotted on chest CTs, and its concentration can be subjectively described. Normally, quantifying a person’s CAC score involves obtaining a heart-specific CT scan. Algorithms that calculate CAC scores from routine chest CTs, however, could massively expand access to this metric. In practice, these algorithms could then be deployed to alert patients and their doctors about abnormally high scores, encouraging them to seek further care. Today, the footprint of the startups offering AI-derived CAC scores is not large, but it is growing quickly. As their use grows, these algorithms may identify high-risk patients who are traditionally missed or who are on the margins of care. 

Historically, CAC scans were believed to have marginal benefit and were marketed to the worried well. Even today, most insurers won’t cover them. Attitudes, though, may be shifting. More expert groups are endorsing CAC scores as a way to refine cardiovascular risk estimates and persuade skeptical patients to start taking statins. 

The promise of AI-derived CAC scores is part of a broader trend toward mining troves of medical data to spot otherwise undetected disease. But while it seems promising, the practice raises plenty of questions. For example, CAC scores haven’t proved useful as a blunt instrument for universal screening. A 2022 Danish study evaluating a population-based program, for example, showed no benefit in mortality rates for patients who had undergone CAC screening tests. If AI delivered this information automatically, would the calculus really shift?

And with widespread adoption, abnormal CAC scores will become common. Who follows up on these findings? “Many health systems aren’t yet set up to act on incidental calcium findings at scale,” says Nishith Khandwala, the cofounder of Bunkerhill Health. Without a standard procedure for doing so, he says, “you risk creating more work than value.” 

There’s also the question of whether these AI-generated scores would actually improve patient care. For a symptomatic patient, a CAC score of zero may offer false reassurance. For the asymptomatic patient with a high CAC score, the next steps remain uncertain. Beyond statins, it isn’t clear if these patients would benefit from starting costly cholesterol-lowering drugs such as Repatha or other PCSK9-inhibitors. It may encourage some to pursue unnecessary and costly downstream procedures that could even end up doing harm. Currently, AI-derived CAC scoring is not reimbursed as a separate service by Medicare or most insurers. The business case for this technology today, effectively, lies in these potentially perverse incentives.

At a fundamental level, this approach could actually change how we define disease. Adam Rodman, a hospitalist and AI expert at Beth Israel Deaconess Medical Center in Boston, has observed that AI-derived CAC scores share similarities with the “incidentaloma,” a term coined in the 1980s to describe unexpected findings on CT scans. In both cases, the normal pattern of diagnosis—in which doctors and patients deliberately embark on testing to figure out what’s causing a specific problem—was fundamentally disrupted. But, as Rodman notes, incidentalomas were still found by humans reviewing the scans.

Now, he says, we are entering an era of “machine-based nosology,” where algorithms define diseases on their own terms. As machines make more diagnoses, they may catch things we miss. But Rodman and I began to wonder if a two-tiered diagnostic future may emerge, where “haves” pay for brand-name algorithms while “have-nots” settle for lesser alternatives. 

For patients who have no risk factors or are detached from regular medical care, an AI-derived CAC score could potentially catch problems earlier and rewrite the script. But how these scores reach people, what is done about them, and whether they can ultimately improve patient outcomes at scale remain open questions. For now—holding the pen as they toggle between patients and algorithmic outputs—clinicians still matter. 

Vishal Khetpal is a fellow in cardiovascular disease. The views expressed in this article do not represent those of his employers. 

Flowers of the future

Flowers play a key role in most landscapes, from urban to rural areas. There might be dandelions poking through the cracks in the pavement, wildflowers on the highway median, or poppies covering a hillside. We might notice the time of year they bloom and connect that to our changing climate. Perhaps we are familiar with their cycles: bud, bloom, wilt, seed. Yet flowers have much more to tell in their bright blooms: The very shape they take is formed by local and global climate conditions. 

The form of a flower is a visual display of its climate, if you know what to look for. In a dry year, its petals’ pigmentation may change. In a warm year, the flower might grow bigger. The flower’s ultraviolet-absorbing pigment increases with higher ozone levels. As the climate changes in the future, how might flowers change? 

white flower and a purple flower
Anthocyanins are red or indigo pigments that supply antioxidants and photoprotectants, which help a plant tolerate climate-related stresses such as droughts.
© 2021 SULLIVAN CN, KOSKI MH

An artistic research project called Plant Futures imagines how a single species of flower might evolve in response to climate change between 2023 and 2100—and invites us to reflect on the complex, long-term impacts of our warming world. The project has created one flower for every year from 2023 to 2100. The form of each one is data-driven, based on climate projections and research into how climate influences flowers’ visual attributes. 

two rows of flowers that are both yellow and purple
More ultraviolet pigment protects flowers’ pollen against increasing ozone levels.
MARCO TODESCO
a white flower with a yellow center
Under unpredictable weather conditions, the speculative flowers grow a second layer of petals. In botany, a second layer is called a “double bloom” and arises from random mutations.
COURTESY OF ANNELIE BERNER

Plant Futures began during an artist residency in Helsinki, where I worked closely with the biologist Aku Korhonen to understand how climate change affected the local ecosystem. While exploring the primeval Haltiala forest, I learned of the Circaea alpina, a tiny flower that was once rare in that area but has become more common as temperatures have risen in recent years. Yet its habitat is delicate: The plant requires shade and a moist environment, and the spruce population that provides those conditions is declining in the face of new forest pathogens. I wondered: What if the Circaea alpina could survive in spite of climate uncertainty? If the dark, shaded bogs turn into bright meadows and the wet ground dries out, how might the flower adapt in order to survive? This flower’s potential became the project’s grounding point. 

The author studying historical Circaea samples in the Luomus Botanical Collections.
COURTESY OF ANNELIE BERNER

Outside the forest, I worked with botanical experts in the Luomus Botanical Collections. I studied samples of Circaea flowers from as far back as 1906, and I researched historical climate conditions in an attempt to understand how flower size and color related to a year’s temperature and precipitation patterns. 

I researched how other flowering plants respond to changes to their climate conditions and wondered how the Circaea would need to adapt to thrive in a future world. If such changes happened, what would the Circaea look like in 2100? 

We designed the future flowers through a combination of data-driven algorithmic mapping and artistic control. I worked with the data artist Marcin Ignac from Variable Studio to create 3D flowers whose appearance was connected to climate data. Using Nodes.io, we made a 3D model of the Circaea alpina based on its current morphology and then mapped how those physical parameters might shift as the climate changes. For example, as the temperature rises and precipitation decreases in the data set, the petal color shifts toward red, reflecting how flowers protect themselves with an increase in anthocyanins. Changes in temperature, carbon dioxide levels, and precipitation rates combine to affect the flowers’ size, density of veins, UV pigments, color, and tendency toward double bloom.
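To give a flavor of that kind of data-driven mapping, here is a minimal sketch in the spirit of the project. The variable names, ranges, and linear relationships are my own illustrative assumptions, not the actual model built in Nodes.io:

```python
# Hypothetical sketch of mapping one year's climate projections to speculative
# flower attributes: warmer and drier conditions increase petal redness (more
# anthocyanins), higher CO2 increases flower size, and greater projected warming
# stands in for model uncertainty, triggering a double bloom. All thresholds
# and scalings are illustrative assumptions.

def lerp(lo, hi, t):
    """Linear interpolation between lo and hi, with t clamped to [0, 1]."""
    return lo + (hi - lo) * max(0.0, min(1.0, t))

def flower_params(temp_anomaly_c, precip_change_pct, co2_ppm):
    """Map climate variables for one year to flower rendering parameters."""
    dryness = max(0.0, -precip_change_pct) / 50.0  # 0 at no drying, 1 at -50%
    warmth = temp_anomaly_c / 4.0                  # 0 at +0 °C, 1 at +4 °C
    redness = lerp(0.0, 1.0, 0.5 * dryness + 0.5 * warmth)
    size_mm = lerp(2.0, 4.0, (co2_ppm - 420.0) / 400.0)  # bigger with more CO2
    double_bloom = warmth > 0.5  # proxy for high-uncertainty years
    return {"redness": redness, "size_mm": size_mm, "double_bloom": double_bloom}
```

Feeding each year's projected temperature, precipitation, and CO2 through a mapping like this yields one flower per year, which is how the series of forms from 2023 to 2100 can stay tied to the underlying data.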
2025: Circaea alpina is ever so slightly larger than usual owing to a warmer summer, but it is otherwise close to the typical Circaea flower in size, color, and other attributes.
2064: We see a bigger flower with more petals, given an increase in carbon dioxide levels and temperature. The bull’s-eye pattern, composed of UV pigment, is bigger and messier because of an increase in ozone and solar radiation. A second tier of petals reflects uncertainty in the climate model.
2074: The flower becomes pinker, an antioxidative response to the stress of consecutive dry days and higher temperatures. Its size increases, primarily because of higher levels of carbon dioxide. The double bloom of petals persists as the climate model’s projections increase in uncertainty.
2100: The flower’s veins are densely packed, which could signal appropriation of a technique leaves use to improve water transport during droughts. It could also be part of a strategy to attract pollinators in the face of worsening air quality that degrades the transmission of scents.
2023–2100: Each year, the speculative flower changes. Size, color, and form shift in accordance with the increased temperature and carbon dioxide levels and the changes in precipitation patterns.
In this 10-centimeter cube of plexiglass, the future flowers are “preserved,” allowing the viewer to see them in a comparative, layered view.
COURTESY OF ANNELIE BERNER

Based in Copenhagen, Annelie Berner is a designer, researcher, teacher, and artist specializing in data visualization.

This retina implant lets people with vision loss do a crossword puzzle

Science Corporation—a competitor to Neuralink founded by the former president of Elon Musk’s brain-interface venture—has leapfrogged its rival after acquiring, at a fire-sale price, a vision implant that’s in advanced testing.

The implant produces a form of “artificial vision” that lets some patients read text and do crosswords, according to a report published in the New England Journal of Medicine today.

The implant is a microelectronic chip placed under the retina. Using signals from a camera mounted on a pair of glasses, the chip emits bursts of electricity in order to bypass photoreceptor cells damaged by macular degeneration, the leading cause of vision loss in elderly people.

“The magnitude of the effect is what’s notable,” says José-Alain Sahel, a University of Pittsburgh vision scientist who led testing of the system, which is called PRIMA. “There’s a patient in the UK and she is reading the pages of a regular book, which is unprecedented.”  

Until last year, the device was being developed by Pixium Vision, a French startup cofounded by Sahel, which faced bankruptcy after it couldn’t raise more cash.  

That’s when Science Corporation swept in to purchase the company’s assets for about €4 million ($4.7 million), according to court filings.

“Science was able to buy it for very cheap just when the study was coming out, so it was good timing for them,” says Sahel. “They could quickly access very advanced technology that’s closer to the market, which is good for a company to have.”

Science was founded in 2021 by Max Hodak, the first president of Neuralink, after his sudden departure from that company. Since its founding, Science has raised around $290 million, according to the venture capital database Pitchbook, and used the money to launch broad-ranging exploratory research on brain interfaces and new types of vision treatments.

“The ambition here is to build a big, standalone medical technology company that would fit in with an Apple, Samsung, or an Alphabet,” Hodak said in an interview at Science’s labs in Alameda, California, in September. “The goal is to change the world in important ways … but we need to make money in order to invest in these programs.”

By acquiring the PRIMA implant program, Science effectively vaulted past years of development and testing. The company has requested approval to sell the eye chip in Europe and is in discussions with regulators in the US.

Unlike Neuralink’s implant, which records brain signals so paralyzed recipients can use their thoughts to move a computer mouse, the retina chip sends information into the brain to produce vision. Because the retina is an outgrowth of the brain, the chip qualifies as a type of brain-computer interface.

Artificial vision systems have been studied for years, and one, called the Argus II, even reached the market and was installed in the eyes of about 400 people. But that product was later withdrawn after it proved to be a money-loser, according to Cortigent, the company that now owns that technology.

Thirty-eight patients in Europe received a PRIMA implant in one eye. On average, the study found, they were able to read five additional lines on a vision chart—the kind with rows of letters, each smaller than the last. Some of that improvement was due to what Sahel calls “various tricks” like using a zoom function, which allows patients to zero in on text they want to read.

The type of vision loss being treated with the new implant is called geographic atrophy, in which patients have peripheral vision but can’t make out objects directly in front of them, like words or faces. According to Prevent Blindness, an advocacy organization, this type of central vision loss affects around one in 10 people over 80.  

The implant was originally designed starting 20 years ago by Daniel Palanker, a laser expert and now a professor at Stanford University, who says his breakthrough was realizing that light beams could supply both energy and information to a chip placed under the retina. Other implants, like Argus II, use a wire, which adds complexity.

“The chip has no brains at all. It just turns light into electrical current that flows into the tissue,” says Palanker. “Patients describe the color they see as yellowish blue or sun color.”

The system works using a wearable camera that records a scene and then blasts bright infrared light into the eye, using a wavelength humans can’t see. That light hits the chip, which is covered by “what are basically tiny solar panels,” says Palanker. “We just try to replace the photoreceptors with a photo-array.”

A diagram of how a visual scene could be represented by a retinal implant.
COURTESY SCIENCE CORPORATION

The current system produces about 400 spots of vision, which lets users make out the outlines of words and objects. Palanker says a next-generation device will have five times as many “pixels” and should let people see more: “What we discovered in the trial is that even though you stimulate individual pixels, patients perceive it as continuous. The patient says ‘I see a line,’ ‘I see a letter.’”
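Four hundred spots of vision works out to roughly a 20-by-20 grid. To get a feel for how coarse that is, here is a toy sketch of reducing a camera frame to 400 on/off stimulation points. This is purely illustrative and not Science's actual image pipeline; the function name, grid layout, and thresholding are all assumptions.

```python
def to_phosphene_grid(image, grid=20, threshold=0.5):
    """Downsample a grayscale image (a list of rows of floats in [0, 1])
    to a grid x grid array of on/off "phosphenes" -- a toy stand-in for
    driving a 400-pixel retinal array. Each grid cell averages its patch
    of source pixels, then is thresholded to a single stimulate /
    don't-stimulate decision. Assumes the image is at least grid x grid.
    """
    h, w = len(image), len(image[0])
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Source-pixel patch covered by this grid cell.
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            patch = [image[y][x] for y in range(y0, y1)
                                 for x in range(x0, x1)]
            row.append(sum(patch) / len(patch) > threshold)
        out.append(row)
    return out
```

Even a simple sketch like this makes the trade-off concrete: a 400-point grid can convey the outline of a large letter, while the planned fivefold increase in pixels would allow correspondingly finer shapes.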

Palanker says it will be important to keep improving the system because “the market size depends on the quality of the vision produced.”

When Pixium teetered on insolvency, Palanker says, he helped search for a buyer, meeting with Hodak. “It was a fire sale, not a celebration,” he says. “But for me it’s a very lucky outcome, because it means the product is going forward. And the purchase price doesn’t really matter, because there’s a big investment needed to bring it to market. It’s going to cost money.”  

The PRIMA artificial vision system has a battery pack/controller and an eye-mounted camera.
COURTESY SCIENCE CORPORATION

During a visit to Science’s headquarters, Hodak described the company’s effort to redesign the system into something sleeker and more user-friendly. In the original design, in addition to the wearable camera, the patient has to carry around a bulky controller containing a battery and laser, as well as buttons to zoom in and out. 

But Science has already prototyped a version in which those electronics are squeezed into what look like an extra-large pair of sunglasses.

“The implant is great, but we’ll have new glasses on patients fairly shortly,” Hodak says. “This will substantially improve their ability to have it with them all day.” 

Other companies also want to treat blindness with brain-computer interfaces, but some think it might be better to send signals directly into the brain. This year, Neuralink has been touting plans for “Blindsight,” a project to send electrical signals directly into the brain’s visual cortex, bypassing the retina entirely. It has yet to test the approach in a person.

From slop to Sotheby’s? AI art enters a new phase

In this era of AI slop, the idea that generative AI tools like Midjourney and Runway could be used to make art can seem absurd: What possible artistic value is there to be found in the likes of Shrimp Jesus and Ballerina Cappuccina? But amid all the muck, there are people using AI tools with real consideration and intent. Some of them are finding notable success as AI artists: They are gaining huge online followings, selling their work at auction, and even having it exhibited in galleries and museums. 

“Sometimes you need a camera, sometimes AI, and sometimes paint or pencil or any other medium,” says Jacob Adler, a musician and composer who won the top prize at the generative video company Runway’s third annual AI Film Festival for his work Total Pixel Space. “It’s just one tool that is added to the creator’s toolbox.” 

One of the most conspicuous features of generative AI tools is their accessibility. With no training and in very little time, you can create an image of whatever you can imagine in whatever style you desire. That’s a key reason AI art has attracted so much criticism: It’s now trivially easy to clog sites like Instagram and TikTok with vapid nonsense, and companies can generate images and video themselves instead of hiring trained artists.

Henry Daubrez created these visuals for a bitcoin NFT titled The Order of Satoshi, which sold at Sotheby’s for $24,000.
COURTESY OF THE ARTIST

Henry Daubrez, an artist and designer who created the AI-generated visuals for a bitcoin NFT that sold for $24,000 at Sotheby’s and is now Google’s first filmmaker in residence, sees that accessibility as one of generative AI’s most positive attributes. People who had long since given up on creative expression, or who simply never had the time to master a medium, are now creating and sharing art, he says. 

But that doesn’t mean the first AI-generated masterpiece could come from just anyone. “I don’t think [generative AI] is going to create an entire generation of geniuses,” says Daubrez, who has described himself as an “AI-assisted artist.” Prompting tools like DALL-E and Midjourney might not require technical finesse, but getting those tools to create something interesting, and then evaluating whether the results are any good, takes both imagination and artistic sensibility, he says: “I think we’re getting into a new generation which is going to be driven by taste.” 

Kira Xonorika’s Trickster is the first piece to use generative AI in the Denver Art Museum’s permanent collection.
COURTESY OF THE ARTIST

Even for artists who do have experience with other media, AI can be more than just a shortcut. Beth Frey, a trained fine artist who shares her AI art on an Instagram account with over 100,000 followers, was drawn to early generative AI tools because of the uncanniness of their creations—she relished the deformed hands and haunting depictions of eating. Over time, the models’ errors have been ironed out, which is part of the reason she hasn’t posted an AI-generated piece on Instagram in over a year. “The better it gets, the less interesting it is for me,” she says. “You have to work harder to get the glitch now.”

Beth Frey’s Instagram account @sentientmuppetfactory features uncanny AI creations.
COURTESY OF THE ARTIST

Making art with AI can require relinquishing control—to the companies that update the tools, and to the tools themselves. For Kira Xonorika, a self-described “AI-collaborative artist” whose short film Trickster is the first generative AI piece in the Denver Art Museum’s permanent collection, that lack of control is part of the appeal. “[What] I really like about AI is the element of unpredictability,” says Xonorika, whose work explores themes such as indigeneity and nonhuman intelligence. “If you’re open to that, it really enhances and expands ideas that you might have.”

But the idea of AI as a co-creator—or even simply as an artistic medium—is still a long way from widespread acceptance. To many people, “AI art” and “AI slop” remain synonymous. And so, as grateful as Daubrez is for the recognition he has received so far, he’s found that pioneering a new form of art in the face of such strong opposition is an emotional mixed bag. “As long as it’s not really accepted that AI is just a tool like any other tool and people will do whatever they want with it—and some of it might be great, some might not be—it’s still going to be sweet [and] sour,” he says.

This startup thinks slime mold can help us design better cities

It is a yellow blob with no brain, yet some researchers believe a curious organism known as slime mold could help us build more resilient cities.

Humans have been building cities for 6,000 years, but slime mold has been around for 600 million. The team behind a new startup called Mireta wants to translate the organism’s biological superpowers into algorithms that might help improve transit times, alleviate congestion, and minimize climate-related disruptions in cities worldwide.

Mireta’s algorithm mimics how slime mold efficiently distributes resources through branching networks. The startup’s founders think this approach could help connect subway stations, design bike lanes, or optimize factory assembly lines. They claim its software can factor in flood zones, traffic patterns, budget constraints, and more.

“It’s very rational to think that some [natural] systems or organisms have actually come up with clever solutions to problems we share,” says Raphael Kay, Mireta’s cofounder and head of design, who has a background in architecture and mechanical engineering and is currently a PhD candidate in materials science and mechanical engineering at Harvard University.

As urbanization continues—about 60% of the global population will live in metropolises by 2030—cities must provide critical services while facing population growth, aging infrastructure, and extreme weather caused by climate change. Kay, who has also studied how microscopic sea creatures could help researchers design zero-energy buildings, believes nature’s time-tested solutions may offer a path toward more adaptive urban systems.

Officially known as Physarum polycephalum, slime mold is neither plant, animal, nor fungus but a single-celled organism older than dinosaurs. When searching for food, it extends tentacle-like projections in multiple directions simultaneously. It then doubles down on the most efficient paths that lead to food while abandoning less productive routes. This process creates optimized networks that balance efficiency with resilience—a sought-after quality in transportation and infrastructure systems.

The organism’s ability to find the shortest path between multiple points while maintaining backup connections has made it a favorite among researchers studying network design. Most famously, in 2010 researchers at Hokkaido University reported results from an experiment in which they dumped a blob of slime mold onto a detailed map of Tokyo’s railway system, marking major stations with oat flakes. At first the brainless organism engulfed the entire map. Days later, it had pruned itself back, leaving behind only the most efficient pathways. The result closely mirrored Tokyo’s actual rail network.

Since then, researchers worldwide have used slime mold to solve mazes and even map the dark matter holding the universe together. Experts across Mexico, Great Britain, and the Iberian peninsula have tasked the organism with redesigning their roadways—though few of these experiments have translated into real-world upgrades.

Historically, researchers working with the organism would print a physical map and add slime mold onto it. But Kay believes that Mireta’s approach, which replicates slime mold’s pathway-building without requiring actual organisms, could help solve more complex problems. Slime mold is visible to the naked eye, so Kay’s team studied how the blobs behave in the lab, focusing on the key behaviors that make these organisms so good at creating efficient networks. Then they translated these behaviors into a set of rules that became an algorithm.
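Mireta has not published its algorithm, but the behavior Kay describes—reinforcing well-used paths while letting unused ones wither—is a well-known pattern in Physarum-inspired network solvers. Below is a minimal sketch of that reinforce-and-decay idea on a toy graph; every name, parameter, and rule here is an illustrative assumption, not Mireta's software.

```python
import heapq

def physarum_sketch(edges, demands, rounds=200,
                    reinforce=0.5, decay=0.9, prune_below=0.05):
    """Toy slime-mold-style network solver (illustrative only).

    edges: dict {(u, v): length}; every edge starts with conductivity 1.0.
    demands: list of (source, sink) pairs the network must connect.
    Each round, traffic for every demand pair follows the currently
    cheapest path (cost = length / conductivity); edges on used paths
    are reinforced, all edges decay, and weak edges are pruned --
    mimicking how Physarum thickens productive tubes and abandons
    the rest.
    """
    cond = {e: 1.0 for e in edges}

    def cheapest_path(src, dst):
        # Dijkstra over cost = length / conductivity.
        dist, prev = {src: 0.0}, {}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for (a, b), length in edges.items():
                if (a, b) not in cond or (a != u and b != u):
                    continue  # pruned edge, or edge not touching u
                v = b if a == u else a
                nd = d + length / cond[(a, b)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, (u, (a, b))
                    heapq.heappush(pq, (nd, v))
        path, u = [], dst
        while u != src:
            u, e = prev[u]
            path.append(e)
        return path

    for _ in range(rounds):
        used = set()
        for src, dst in demands:
            used.update(cheapest_path(src, dst))
        for e in list(cond):
            cond[e] = cond[e] * decay + (reinforce if e in used else 0.0)
            if cond[e] < prune_below:
                del cond[e]  # abandon the unproductive tube
    return cond  # surviving edges form the pruned network
```

On a small graph with one demand pair, the surviving edges converge to a single efficient route while the alternatives decay away—the same pruning dynamic the Tokyo railway experiment made famous, minus the oat flakes.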

Some experts aren’t convinced. According to Geoff Boeing, an associate professor at the University of Southern California’s Department of Urban Planning and Spatial Analysis, such algorithms don’t address “the messy realities of entering a room with a group of stakeholders and co-visioning a future for their community.” Modern urban planning problems, he says, aren’t solely technical issues: “It’s not that we don’t know how to make infrastructure networks efficient, resilient, connected—it’s that it’s politically challenging to do so.”

Michael Batty, a professor emeritus at University College London’s Centre for Advanced Spatial Analysis, finds the concept more promising. “There is certainly potential for exploration,” he says, noting that humans have long drawn parallels between biological systems and cities. For decades now, designers have looked to nature for ideas—think ventilation systems inspired by termite mounds or bullet trains modeled after the kingfisher’s beak.

Like Boeing, Batty worries that such algorithms could reinforce top-down planning when most cities grow from the bottom up. But for Kay, the algorithm’s beauty lies in how it mimics bottom-up biological growth—like the way slime mold starts from multiple points and connects organically rather than following predetermined paths. 

Since launching earlier this year, Mireta, which is based in Cambridge, Massachusetts, has worked on about five projects. And slime mold is just the beginning. The team is also looking at algorithms inspired by ants, which leave chemical trails that strengthen with use and have their own decentralized solutions for network optimization. “Biology has solved just about every network problem you can imagine,” says Kay.

Elissaveta M. Brandon is an independent journalist interested in how design, culture, and technology shape the way we live.