Google DeepMind’s new AI tool helped create more than 700 new materials

From EV batteries to solar cells to microchips, new materials can supercharge technological breakthroughs. But discovering them usually takes months or even years of trial-and-error research. 

Google DeepMind hopes to change that with a new tool that uses deep learning to dramatically speed up the process of discovering new materials. Called graph networks for materials exploration (GNoME), the technology has already been used to predict structures for 2.2 million new materials, of which more than 700 have gone on to be created in the lab and are now being tested. It is described in a paper published in Nature today. 

Alongside GNoME, Lawrence Berkeley National Laboratory also announced a new autonomous lab. The lab takes data from the materials database that includes some of GNoME’s discoveries and uses machine learning and robotic arms to engineer new materials without the help of humans. Google DeepMind says that together, these advancements show the potential of using AI to scale up the discovery and development of new materials.

GNoME can be described as AlphaFold for materials discovery, according to Ju Li, a materials science and engineering professor at the Massachusetts Institute of Technology. AlphaFold, a DeepMind AI system announced in 2020, predicts the structures of proteins with high accuracy and has since advanced biological research and drug discovery. Thanks to GNoME, the number of known stable materials has grown almost tenfold, to 421,000.

“While materials play a very critical role in almost any technology, we as humanity know only a few tens of thousands of stable materials,” said Dogus Cubuk, materials discovery lead at Google DeepMind, at a press briefing. 

To discover new materials, scientists combine elements across the periodic table. But because there are so many combinations, it’s inefficient to do this process blindly. Instead, researchers build upon existing structures, making small tweaks in the hope of discovering new combinations that hold potential. However, this painstaking process is still very time-consuming. Also, because it builds on existing structures, it limits the potential for unexpected discoveries. 

To overcome these limitations, DeepMind combines two different deep-learning models. The first generates more than a billion structures by making modifications to elements in existing materials. The second, however, ignores existing structures and predicts the stability of new materials purely on the basis of chemical formulas. The combination of these two models allows for a much broader range of possibilities. 

Once the candidate structures are generated, they are filtered through DeepMind’s GNoME models. The models predict the decomposition energy of a given structure, which is an important indicator of how stable the material can be. “Stable” materials do not easily decompose, which is important for engineering purposes. GNoME selects the most promising candidates, which go through further evaluation based on known theoretical frameworks.

This process is then repeated multiple times, with each discovery incorporated into the next round of training.
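
For readers who want a more concrete picture of that loop, here is a minimal toy sketch in Python. Every name in it is an invented stand-in, not DeepMind’s code: the two “generators” mimic the two deep-learning models, the stability predictor stands in for GNoME, and the verification step stands in for the much slower theoretical check (in practice, density functional theory calculations). The point is only to show the shape of the generate, filter, verify, and retrain cycle.

```python
# Toy sketch of the generate-filter-verify-retrain loop described above.
# All functions are invented stand-ins; the "models" are random-number
# generators used purely for illustration.

import random

def generate_by_substitution(known_materials, n_variants):
    """First model (toy): tweak elements in known structures."""
    return [f"{m}-variant-{i}" for m in known_materials for i in range(n_variants)]

def generate_from_formulas(n_candidates):
    """Second model (toy): propose compositions with no structural template."""
    return [f"novel-formula-{i}" for i in range(n_candidates)]

def predict_decomposition_energy(candidate, round_id):
    """GNoME stand-in: lower predicted energy = less likely to decompose."""
    random.seed(hash((candidate, round_id)))
    return random.uniform(-0.5, 0.5)  # eV/atom, invented for illustration

def verify(candidate):
    """Stand-in for the expensive first-principles check of a shortlisted candidate."""
    random.seed(hash(candidate) + 1)
    return random.random() < 0.5

known = ["LiCoO2", "NaCl", "SrTiO3"]
confirmed_stable = list(known)

for round_id in range(3):  # each round's confirmed results feed the next round
    candidates = generate_by_substitution(known, 5) + generate_from_formulas(10)
    # Keep only candidates the model predicts will not decompose.
    promising = [c for c in candidates
                 if predict_decomposition_energy(c, round_id) < 0.0]
    newly_confirmed = [c for c in promising if verify(c)]
    confirmed_stable.extend(newly_confirmed)  # these become new training labels
    print(f"round {round_id}: {len(promising)} promising, {len(newly_confirmed)} confirmed")
```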

In its first round, GNoME predicted materials’ stability with a precision of around 5%, but that precision climbed quickly over the iterative learning process. In the final results, GNoME correctly predicted the stability of structures more than 80% of the time for the first model and 33% of the time for the second. 

Using AI models to come up with new materials is not a novel idea. The Materials Project, a program led by Kristin Persson at Berkeley Lab, has used similar techniques to discover and improve the stability of 48,000 materials. 

However, GNoME’s size and precision set it apart from previous efforts. It was trained on at least an order of magnitude more data than any previous model, says Chris Bartel, an assistant professor of chemical engineering and materials science at the University of Minnesota. 

Doing similar calculations has previously been expensive and limited in scale, says Yifei Mo, an associate professor of materials science and engineering at the University of Maryland. GNoME allows these computations to scale up with higher accuracy and at much less computational cost, Mo says: “The impact can be huge.”

Once new materials have been identified, it is equally important to synthesize them and prove their usefulness. Berkeley Lab’s new autonomous laboratory, named the A-Lab, has been using some of GNoME’s discoveries with the Materials Project information, integrating robotics with machine learning to optimize the development of such materials.

The lab is capable of making its own decisions about how to make a proposed material and creates up to five initial formulations. These formulations are generated by a machine-learning model trained on existing scientific literature. After each experiment, the lab uses the results to adjust the recipes.

Researchers at Berkeley Lab say that A-Lab was able to perform 355 experiments over 17 days and successfully synthesized 41 out of 58 proposed compounds. That works out to more than two successful syntheses a day.

In a typical, human-led lab, it takes much longer to make materials. “If you’re unlucky, it can take months or even years,” said Persson at a press briefing. Most students give up after a few weeks, she said. “But the A-Lab doesn’t mind failing. It keeps trying and trying.”

Researchers at DeepMind and Berkeley Lab say these new AI tools can help accelerate hardware innovation in energy, computing, and many other sectors.

“Hardware, especially when it comes to clean energy, needs innovation if we are going to solve the climate crisis,” says Persson. “This is one aspect of accelerating that innovation.”

Bartel, who was not involved in the research, says that these materials will be promising candidates for technologies spanning batteries, computer chips, ceramics, and electronics. 

Lithium-ion battery conductors are one of the most promising use cases. These conductors play an important role in batteries by carrying lithium ions between a battery’s electrodes. DeepMind says GNoME identified 528 promising lithium-ion conductors among other discoveries, some of which may help make batteries more efficient. 

However, even after new materials are discovered, it usually takes decades for industries to take them to the commercial stage. “If we can reduce this to five years, that will be a big improvement,” says Cubuk.

Correction: This story has been updated to make clear where the lab’s data comes from.

That wasn’t Google I/O — it was Google AI

Things got weird at yesterday’s Google I/O conference right from the jump, when the duck hit the stage.  

The day began with a musical performance described as a “generative AI experiment featuring Dan Deacon and Google’s MusicLM, Phenaki, and Bard AI tools.” It wasn’t clear exactly how much of it was machine-made and how much was human. There was a long, lyrically rambling dissertation about meeting a duck with lips. Deacon informed the audience that we were all in a band called Chiptune and launched into a song with various chiptune riffs layered on top of each other. Later he had a song about oat milk? I believe the lyrics were entirely AI generated. Someone wearing a duck suit with lipstick came out and danced on stage. It was all very confusing. 

Then again, everything about life in the AI era is a bit confusing and weird. And this was, no doubt, the AI show. It was Google I/O as Google AI. So much so that on Twitter, the internet’s comment section, person after person used #GoogleIO to complain about all the AI talk, and exhorted Google to get on with it and get to the phones. (There was an eagerly anticipated new phone, the Pixel Fold. It folds.) 

Yet when Google CEO Sundar Pichai, who once ran the company’s efforts with Android, stepped on stage, he made it clear what he was there to talk about. It wasn’t a new phone—it was AI. He opened by going straight at the ways AI is in everything the company does now. With generative AI, he said, “we are reimagining all our core products, including Search.” 

I don’t think that’s quite right. 

At Google in 2023, it seems pretty clear that AI itself now is the core product. Or at least it’s the backbone of that product, a key ingredient that manifests itself in different forms. As my colleague Melissa Heikkilä put it in her report on the company’s efforts: Google is throwing generative AI at everything.

The company made this point in one demo after another, all morning long. A Gmail demo showed how generative AI can compose an elaborate email to an airline to help you get a refund. The new Magic Editor in Google Photos will not only remove unwanted elements but reposition people and objects in photos, make the sky brighter and bluer, and then adjust the lighting in the photo so that all that doctoring looks natural. 

In Docs, the AI will create a full job description from just a few words. It will generate spreadsheets. Help you plan your vacation in Search, adjust the tone of your text messages to be more professional (or more personable), give you an “immersive view” in Maps, summarize your email, write computer code, seamlessly translate and lip-sync videos. It is so deeply integrated into not only the Android operating system but the hardware itself that Google now makes “the only phone with AI at its center,” as Google’s Rick Osterloh said in describing the Tensor G2 chip. Phew. 

Google I/O is a highly, highly scripted event. For months now the company has faced criticism that its AI efforts were being outpaced by the likes of OpenAI’s ChatGPT or Microsoft Bing. Alarm bells were sounding internally, too. Today felt like a long-planned answer to that. Taken together, the demos came across as a kind of flex—a way to show what the company has under the hood and how it can deploy that technology throughout its existing, massively popular products (Pichai noted that the company has five different products with more than 2 billion users). 

And yet at the same time, it is clearly trying to walk a line, showing off what it can do but in ways that won’t, you know, freak everyone out.

Three years ago, the company forced out Timnit Gebru, the co-lead of its ethical AI team, essentially over a paper that raised concerns about the dangers of large language models. Gebru’s concerns have since become mainstream. Her departure, and the fallout from it, marked a turning point in the conversation about the dangers of unchecked AI. One would hope Google learned from it; from her. 

And then, just last week, Geoffrey Hinton announced he was stepping down from Google, in large part so he’d be free to sound the alarm bell about the dire consequences of rapid advancements in AI that he fears could soon enable it to surpass human intelligence. (Or, as Hinton put it, it is “quite conceivable that humanity is just a passing phase in the evolution of intelligence.”) 

And so, I/O yesterday was a far cry from the event in 2018, when the company gleefully demonstrated Duplex, showcasing how Google Assistant could make automated calls to small businesses without ever letting the people on those calls know they were interacting with an AI. It was an incredible demo. And one that made a great many people deeply uneasy.

Again and again at this year’s I/O, we heard about responsibility. James Manyika, who leads the company’s technology and society program, opened by talking about the wonders AI has wrought, particularly around protein folding, but was quick to transition to the ways the company is thinking about misinformation, noting how it would watermark generated images and alluding to guardrails to prevent their misuse. 

There was a demo of how Google can deploy image provenance to counter misinformation, effectively debunking an image by showing the first time it was indexed (in the example on stage, a fake photo purporting to show that the moon landing was a hoax). It was a little bit of grounding amid all the awe and wonder, operating at scale. 

And then … on to the phones. The new Google Pixel Fold scored the biggest applause line of the day. People like gadgets.

The phone may fold, but for me it was among the least mind-bending things I saw all day. And in my head, I kept returning to one of the earliest examples we saw: a photo of a woman standing in front of some hills and a waterfall.

Magic Editor erased her backpack strap. Cool! And it also made the cloudy sky look a lot bluer. Reinforcing this, in another example—this time with a child sitting on a bench holding balloons—Magic Editor once again made the day brighter and then adjusted all the lighting in the photo so the sunshine would look more natural. More real than real.

How far do we want to go here? What’s the end goal we are aiming for? Ultimately, do we just skip the vacation altogether and generate some pretty, pretty pictures? Can we supplant our memories with sunnier, more idealized versions of the past? Are we making reality better? Is everything more beautiful? Is everything better? Is this all very, very cool? Or something else? Something we haven’t realized yet?

When my dad was sick, I started Googling grief. Then I couldn’t escape it.

I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever might be coming. That included my father’s throat cancer. Initially I focused on the purely medical. I endeavored to learn as much as I could about molecular biomarkers, transoral robotic surgeries, and the functional anatomy of the epiglottis. 

Then, as grief started to become a likely scenario, it too got the same treatment. It seemed that one of the pillars of my life, my dad, was about to fall, and I grew obsessed with trying to understand and prepare for that. 

I am a mostly visual thinker, and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless. 

I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix. How will it feel? How will I manage it?

I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion. 

Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss. 

I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? 

I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters, the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable. 

In my haze of panic and searching, I initially felt that my algorithms were a force for good. They seemed to be working with me, making me feel less alone and more capable. 

In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.” 

Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web. 

In principle, most ad targeting still exploits basic methods like segmentation, where people grouped by characteristics such as gender, age, and location are served content akin to what others in their group have engaged with or liked. 

But in the eight and a half years since Zuckerman’s piece, artificial intelligence and the collection of ever more data have made targeting exponentially more personalized and chronic. The rise of machine learning has made it easier to direct content on the basis of digital behavioral data points rather than demographic attributes. These can be “stronger predictors than traditional segmenting,” according to Max Van Kleek, a researcher on human-computer interaction at the University of Oxford. Digital behavior data is also very easy to access and accumulate. The system is incredibly effective at capturing personal data—each click, scroll, and view is documented, measured, and categorized.  
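
To see the distinction in miniature, consider the toy sketch below. It is not how any real ad system is implemented, and every number, field name, and threshold is invented; it simply contrasts a coarse segment-level average with a score driven by one person’s own recent clicks.

```python
# Toy contrast between demographic segmentation and behavioral targeting.
# All data and numbers are invented for illustration.

segment_engagement_rates = {
    ("woman", "25-34", "US"): 0.04,  # made-up average click rate for the group
    ("man", "35-44", "US"): 0.03,
}

def segment_score(user_segment):
    """Segmentation (toy): everyone in the same coarse group gets the same prediction."""
    return segment_engagement_rates.get(user_segment, 0.02)

def behavioral_score(recent_actions, topic):
    """Behavioral targeting (toy): the prediction follows the individual's own
    recent engagement with the topic, weighted toward the most recent actions."""
    related = [a for a in recent_actions if a["topic"] == topic]
    if not related:
        return 0.01
    weights = [1.0 / (1 + a["days_ago"]) for a in related]
    weighted_clicks = [a["clicked"] * w for a, w in zip(related, weights)]
    return sum(weighted_clicks) / sum(weights)

me = ("woman", "25-34", "US")
my_recent_actions = [
    {"topic": "grief", "clicked": 1, "days_ago": 0},
    {"topic": "grief", "clicked": 1, "days_ago": 2},
    {"topic": "gardening", "clicked": 0, "days_ago": 5},
]

print("segment-based score for a grief story: ", segment_score(me))
print("behavior-based score for a grief story:",
      round(behavioral_score(my_recent_actions, "grief"), 2))
```

The coarse segment barely registers my preoccupation; the behavioral score locks onto it almost immediately, which is the dynamic the rest of this story describes.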

Simply put, the more that Instagram and Amazon and the other various platforms I frequented could entangle me in webs of despair for ever more minutes and hours of my day, the more content and the more ads they could serve me. 

Whether you’re aware of it or not, you’re also probably caught in a digital pattern of some kind. These cycles can quickly turn harmful, and I spent months asking experts how we can get more control over rogue algorithms. 

A history of grieving

This story starts at what I mistakenly thought was the end of a marathon—16 months after my dad went to the dentist for a toothache and hours later got a voicemail about cancer. That was really the only day I felt brave. 

The marathon was a 26.2-mile army crawl. By mile 3, all the skin on your elbows is ground up and there’s a paste of pink tissue and gravel on the pavement. It’s bone by mile 10. But after 33 rounds of radiation with chemotherapy, we thought we were at the finish line.  

Then this past summer, my dad’s cancer made a very unlikely comeback, with a vengeance, and it wasn’t clear whether it was treatable. 

Really, the sounds were the worst. The coughing, coughing, choking—Is he breathing? He’s not breathing, he’s not breathing—choking, vomit, cough. Breath.

That was the soundtrack as I started grieving my dad privately, prematurely, and voyeuristically. 

I began reading obituaries from bed in the morning.

The husband of a fellow Notre Dame alumna dropped dead during a morning run. I started checking her Instagram daily, trying to get a closer view. This drew me into #widowjourney and #youngwidow. Soon, Instagram began recommending the accounts of other widows. 

A friend gently suggested that I could maybe stop examining the fog. “Have you tried looking away?”

I stayed up all night sometime around Thanksgiving sobbing as I traveled through a rabbit hole about the death of Princess Diana. 

Sometime that month, my Amazon account gained a footer of grief-oriented book recommendations. I was invited to consider The Year of Magical Thinking, Crying in H Mart: A Memoir, and F*ck Death: An Honest Guide to Getting Through Grief Without the Condolences, Sympathy, and Other BS as I shopped for face lotion. 

Amazon’s website says its recommendations are “based on your interests.” The site explains, “We examine the items you’ve purchased, items you’ve told us you own, and items you’ve rated. We compare your activity on our site with that of other customers, and using this comparison, recommend other items that may interest you in Your Amazon.” (An Amazon spokesperson gave me a similar explanation and told me I could edit my browsing history.)

At some point, I had searched for a book on loss.
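
Amazon’s description above (“we compare your activity on our site with that of other customers”) maps onto a family of techniques often called item-to-item collaborative filtering. Here is a toy sketch of that general idea with invented shopping histories; it is an illustration, not Amazon’s actual system or code.

```python
# Toy item-to-item co-occurrence recommender with invented data.
from collections import defaultdict
from itertools import combinations

# One set of items per (imaginary) customer.
histories = [
    {"face lotion", "shampoo"},
    {"face lotion", "The Year of Magical Thinking"},
    {"The Year of Magical Thinking", "Crying in H Mart"},
    {"face lotion", "The Year of Magical Thinking", "Crying in H Mart"},
]

# Count how often each pair of items shows up in the same customer's history.
co_counts = defaultdict(int)
for items in histories:
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1

def recommendations_for(item, top_n=3):
    """Rank other items by how often they co-occur with `item`."""
    scores = defaultdict(int)
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A shopper whose history overlaps with grief readers' histories starts
# seeing grief titles next to the face lotion.
print(recommendations_for("face lotion"))
```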

Content recommendation algorithms run on methods similar to ad targeting, though each of the major content platforms has its own formula for measuring user engagement and determining which posts are prioritized for different people. And those algorithms change all the time, in part because AI enables them to get better and better, and in part because platforms are trying to prevent users from gaming the system.

Sometimes it’s not even clear what exactly the recommendation algorithms are trying to achieve, says Ranjit Singh, a data and policy researcher at Data & Society, a nonprofit research organization focused on tech governance. “One of the challenges of doing this work is also that in a lot of machine-learning modeling, how the model comes up with the recommendation that it does is something that is even unclear to the people who coded the system,” he says.

This is at least partly why by the time I became aware of the cycle I had created, there was little I could do to quickly get out. All this automation makes it harder for individual users and tech companies alike to control and adjust the algorithms. It’s much harder to redirect an algorithm when it’s not clear why it’s serving certain content in the first place. 

When personalization becomes toxic

One night, I described my cliff phantasm to a dear friend as she drove me home after dinner. She had tragically lost her own dad. She gently suggested that I could maybe stop examining the fog. “Have you tried looking away?” she asked. 

Perhaps I could fix my gaze on those with me at this lookout and try to appreciate that we had not yet had to walk over the edge.

It was brilliant advice that my therapist agreed with enthusiastically. 

I committed to creating more memories in the present with my family rather than spending so much time alone wallowing in what might come. I struck up conversations with my dad and told him stories I hadn’t before. 

I tried hard to bypass triggering stories on my feeds and regain focus when I started going down a rabbit hole. I stopped checking for updates from the widows and widowers I had grown attached to. I unfollowed them along with other content I knew was unhealthy.

But the more I tried to avoid it, the more it came to me. No longer a priest, my algorithms had become more like a begging dog. 

My Google mobile app was perhaps the most relentless, as it seemed to insightfully connect all my searching for cancer pathologies to stories of personal loss. In the home screen of my search app, which Google calls “Discover,” a YouTube video imploring me to “Trust God Even When Life Is Hard” would be followed by a Healthline story detailing the symptoms of bladder cancer. 

(As a Google spokesperson explained to me, “Discover helps you find information from high-quality sources about topics you’re interested in. Our systems are not designed to infer sensitive characteristics like health conditions, but sometimes content about these topics could appear in Discover”—I took this to mean that I was not supposed to be seeing the content I was—“and we’re working to make it easier for people to provide direct feedback and have even more control over what they see in their feed.”)

“There’s an assumption the industry makes that personalization is a positive thing,” says Singh. “The reason they collect all of this data is because they want to personalize services so that it’s exactly catered to what you want.” 

But, he cautions, this strategy is informed by two false ideas that are common among people working in the field. The first is that platforms ought to prioritize the individual unit, so that if a person wants to see extreme content, the platform should offer extreme content; the effect of that content on an individual’s health or on broader communities is peripheral. 

“There’s an assumption the industry makes that personalization is a positive thing.” 

The second is that the algorithm is the best judge of what content you actually want to see. 

For me, both assumptions were not just wrong but harmful. Not only were the various algorithms I interacted with no longer trusted mediators, but by the time I realized all my ideation was unhealthy, the web of content I’d been living in was overwhelming.   

I found that the urge to click loss-related prompts was inescapable, and at the same time, the content seemed to be getting more tragic. Next to articles about the midterm elections, I’d see advertisements for stories about someone who died unexpectedly just hours after their wedding and the increase in breast cancer in women under 30. 

“These algorithms can ‘rabbit hole’ users into content that can feel detrimental to their mental health,” says Nina Vasan, the founder and executive director of Brainstorm, a Stanford mental-health lab. “For example, you can feel inundated with information about cancer and grief, and that content can get increasingly emotionally extreme.”

Eventually, I deleted the Instagram and Twitter apps from my phone altogether. I stopped looking at stories suggested by Google. Afterwards, I felt lighter and more present. The fog seemed further out.

The internet doesn’t forget

My dad started to stabilize by early winter, and I began to transition from a state of crisis to one of tentative normalcy (though still largely app-less). I also went back to work, which requires a lot of time online. 

The internet is less forgetful than people; that’s one of its main strengths. But harmful effects of digital permanence have been widely exposed—for example, there’s the detrimental impact that a documented adolescence has on identity as we age. In one particularly memorable essay, Wired’s Lauren Goode wrote about how various apps kept re-upping old photos and wouldn’t let her forget that she was once meant to be a bride after she called off her wedding. 

When I logged back on, my grief-obsessed algorithms were waiting for me with a persistence I had not anticipated. I just wanted them to leave me alone.

As Singh notes, fulfilling that wish raises technical challenges. “At a particular moment of time, this was a good recommendation for me, but it’s not now. So how do I actually make that difference legible to an algorithm or a recommendation system? I believe that it’s an unanswered question,” he says. 

Oxford’s Van Kleek echoes this, explaining that managing upsetting content is a hugely subjective challenge, which makes it hard to deal with technically. “The exposure to a single piece of information can be completely harmless or deeply harmful depending on your experience,” he says. It’s quite hard to deal with that subjectivity when you consider just how much potentially triggering information is on the web.

We don’t have tools of transparency that allow us to understand and manage what we see online, so we make up theories and change our scrolling behavior accordingly. (There’s an entire research field around this behavior, centered on “algorithmic folk theories,” which explores all the conjectures we make as we try to decipher the algorithms that sort our digital lives.) 

I supposed not clicking or looking at content centered on trauma and cancer ought to do the trick eventually. I’d scroll quickly past a post about a brain tumor on my Instagram’s “For you” page, as if passing an old acquaintance I was trying to avoid on the street. 

It did not really work. 

“Most of these companies really fiddle with how they define engagement. So it can vary from one time in space to another, depending on how they’re defining it from month to month,” says Robyn Caplan, a social media researcher at Data & Society. 

Many platforms have begun to build in features to give users more control over their recommendations. “There are a lot more mechanisms than we realize,” Caplan adds, though using those tools can be confusing. “You should be able to break free of something that you find negative in your life in online spaces. There are ways that these companies have built that in, to some degree. We don’t always know whether they’re effective or not, or how they work.” Instagram, for instance, allows you to click “Not interested” on suggested posts (though I admit I never tried to do it). A spokesperson for the company also suggested that I adjust the interests in my account settings to better curate my feed.

By this point, I was frustrated that I was having such a hard time moving on. Cancer sucks so much time, emotion, and energy from the lives and families it affects, and my digital space was making it challenging to find balance. While searching Twitter for developments on tech legislation for work, I’d be prompted with stories about a child dying of a rare cancer. 

I resolved to be more aggressive about reshaping my digital life. 

How to better manage your digital space

I started muting and unfollowing accounts on Instagram when I’d scroll past triggering content, at first tentatively and then vigorously. A spokesperson for Instagram sent over a list of helpful features that I could use, including an option to snooze suggested posts and to turn on reminders to “take a break” after a set period of time on the app. 

I cleared my search history on Google and sought out Twitter accounts related to my professional interests. I adjusted my recommendations on Amazon (Account > Recommendations > Improve your recommendations) and cleared my browsing history. 

I also capitalized on my network of sources—a privilege of my job that few in similar situations would have—and collected a handful of tips from researchers about how to better control rogue algorithms. Some I knew about; others I didn’t. 

Everyone I talked to told me I had been right to assume that it works to stop engaging with content I didn’t want to see, though they emphasized that it takes time. For me, it has taken months. It also has required that I keep exposing myself to harmful content and manage any triggering effects while I do this—a reality that anyone in a similar situation should be aware of. 

Relatedly, experts told me that engaging with content you do want to see is important. Caplan told me she personally asked her friends to tag her and DM her with happy and funny content when her own digital space grew overwhelming. 

“That is one way that we kind of reproduce the things that we experience in our social life into online spaces,” she says. “So if you’re finding that you are depressed and you’re constantly reading sad stories, what do you do? You ask your friends, ‘Oh, what’s a funny show to watch?’”

Another strategy experts mentioned is obfuscation—trying to confuse your algorithm. Tactics include liking and engaging with alternative content, ideally related to topics for which the platform might have a plethora of further suggestions—like dogs, gardening, or political news. (I personally chose to engage with accounts related to #DadHumor, which I do not regret.) Singh recommended handing over the account to a friend for a few days with instructions to use it however might be natural for them, which can help you avoid harmful content and also throw off the algorithm. 

You can also hide from your algorithms by using incognito mode or private browsers, or by regularly clearing browsing histories and cookies (this is also just good digital hygiene). I turned off “Personal results” on my Google iPhone app, which helped immensely. 

One of my favorite tips was to “embrace the Finsta,” a reference to fake Instagram accounts. Not only on Instagram but across your digital life, you can make multiple profiles dedicated to different interests or modes. I created multiple Google accounts: one for my personal life, one for professional content, another for medical needs. I now search, correspond, and store information accordingly, which has made me more organized and more comfortable online in general. 

All this is a lot of work and requires a lot of digital savvy, time, and effort from the end user, which in and of itself can be harmful. Even with the right tools, it’s incredibly important to be mindful of how much time you spend online. Research findings are overwhelming at this point: too much time on social media leads to higher rates of depression and anxiety. 

“For most people, studies suggest that spending more than one hour a day on social media can make mental health worse. Overall there is a link between increase in time spent on social media and worsening mental health,” says Stanford’s Vasan. She recommends taking breaks to reset or regularly evaluating how your time spent online is making you feel. 

A clean scan

Cancer does not really end—you just sort of slowly walk out of it, and I am still navigating stickiness across the personal, social, and professional spheres of my life. First you finish treatment. Then you get an initial clean scan. The sores start to close—though the fatigue lasts for years. And you hope for a second clean scan, and another after that. 

The faces of doctors and nurses who carried you every day begin to blur in your memory. Sometime in December, topics like work and weddings started taking up more time than cancer during conversations with friends. 

What I actually want is to control when I look at information about disease, grief, and anxiety.

My dad got a cancer-free scan a few weeks ago. My focus and creativity have mostly returned and I don’t need to take as many breaks. I feel anxiety melting out of my spine in a slow, satisfying drip.

And while my online environment has gotten better, it’s still not perfect. I’m no longer traveling down rabbit holes of tragedy. I’d say some of my apps are cleansed; some are still getting there. The advertisements served to me across the web often still center on cancer or sudden death. But taking an active approach to managing my digital space, as outlined above, has dramatically improved my experience online and my mental health overall. 

Still, I remain surprised at just how harmful and inescapable my algorithms became while I was struggling this fall. Our digital lives are an inseparable part of how we experience the world, but the mechanisms that reinforce our subconscious behaviors or obsessions, like recommendation algorithms, can make our digital experience really destructive. This, of course, can be particularly damaging for people struggling with issues like self-harm or eating disorders—even more so if they’re young. 

With all this in mind, I’m very deliberate these days about what I look at and how. 

What I actually want is to control when I look at information about disease, grief, and anxiety. I’d actually like to be able to read about cancer, at appropriate times, and understand the new research coming out. My dad’s treatment is fairly new and experimental. If he’d gotten the same diagnosis five years ago, it most certainly would have been a death sentence. The field is changing, and I’d like to stay on top of it. And when my parents do pass away, I want to be able to find support online. 

But I won’t do any of it the same way. For a long time, I was relatively dismissive of alternative methods of living online. It seemed burdensome to find new ways of doing everyday things like searching, shopping, and following friends—the power of tech behemoths is largely in the ease they guarantee. 

Indeed, Zuckerman tells me that the challenge now is finding practical substitute digital models that empower users. There are viable options; user control over data and platforms is part of the ethos behind hyped concepts like Web3. Van Kleek says the reignition of the open-source movement in recent years makes him hopeful: increased transparency and collaboration on projects like Mastodon, the burgeoning Twitter alternative, might give less power to the algorithm and more power to the user. 

“I would suggest that it’s not as bad as you fear. Nine years ago, complaining about an advertising-based web was a weird thing to be doing. Now it’s a mainstream complaint,” Zuckerman recently wrote to me in an email. “We just need to channel that dissatisfaction into actual alternatives and change.” 

My biggest digital preoccupation these days is navigating the best way to stay connected with my dad over the phone now that I am back in my apartment 1,200 miles away. Cancer stole the “g” from “Good morning, ball player girl,” his signature greeting, when it took half his tongue. 

I still Google things like “How to clean a feeding tube” and recently watched a YouTube video to refresh my memory of the Heimlich maneuver. But now I use Tor.

Clarification: This story has been updated to reflect that the explanation of Amazon’s recommendations on its site refers to its recommendation algorithm generally, not specifically its advertising recommendations.

How the Supreme Court ruling on Section 230 could end Reddit as we know it

When the Supreme Court hears a landmark case on Section 230 later in February, all eyes will be on the biggest players in tech—Meta, Google, Twitter, YouTube.

A legal provision tucked into the Communications Decency Act, Section 230 has provided the foundation for Big Tech’s explosive growth, protecting social platforms from lawsuits over harmful user-generated content while giving them leeway to remove posts at their discretion (though they are still required to take down illegal content, such as child pornography, if they become aware of its existence). The case might have a range of outcomes; if Section 230 is repealed or reinterpreted, these companies may be forced to transform their approach to moderating content and to overhaul their platform architectures in the process.

But another big issue is at stake that has received much less attention: depending on the outcome of the case, individual users of sites may suddenly be liable for run-of-the-mill content moderation. Many sites rely on users for community moderation to edit, shape, remove, and promote other users’ content online—think Reddit’s upvote, or changes to a Wikipedia page. What might happen if those users were forced to take on legal risk every time they made a content decision? 

In short, the court could change Section 230 in ways that won’t just impact big platforms; smaller sites like Reddit and Wikipedia that rely on community moderation will be hit too, warns Emma Llansó, director of the Center for Democracy and Technology’s Free Expression Project. “It would be an enormous loss to online speech communities if suddenly it got really risky for mods themselves to do their work,” she says. 

In an amicus brief filed in January, lawyers for Reddit argued that its signature upvote/downvote feature is at risk in Gonzalez v. Google, the case that will reexamine the application of Section 230. Users “directly determine what content gets promoted or becomes less visible by using Reddit’s innovative ‘upvote’ and ‘downvote’ features,” the brief reads. “All of those activities are protected by Section 230, which Congress crafted to immunize Internet ‘users,’ not just platforms.” 

At the heart of Gonzalez is the question of whether the “recommendation” of content is different from the display of content; this is widely understood to have broad implications for recommendation algorithms that power platforms like Facebook, YouTube, and TikTok. But it could also have an impact on users’ rights to like and promote content in forums where they act as community moderators and effectively boost some content over other content. 

Reddit is questioning where user preferences fit, either directly or indirectly, into the interpretation of “recommendation.” “The danger is that you and I, when we use the internet, we do a lot of things that are short of actually creating the content,” says Ben Lee, Reddit’s general counsel. “We’re seeing other people’s content, and then we’re interacting with it. At what point are we ourselves, because of what we did, recommending that content?” 

Reddit currently has 50 million active daily users, according to its amicus brief, and the site sorts its content according to whether users upvote or downvote posts and comments in a discussion thread. Though it does employ recommendation algorithms to help new users find discussions they might be interested in, much of its content recommendation system relies on these community-powered votes. As a result, a change to community moderation would likely drastically change how the site works.  
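
For a sense of what “community-powered votes” means mechanically, here is a toy ranking sketch. The formula and numbers are invented for illustration (Reddit’s real ranking is more involved); the point is simply that users’ votes, moderated and weighted against a post’s age, determine what rises.

```python
# Toy vote-plus-recency ranking, invented for illustration only.
import math
from datetime import datetime, timezone

posts = [
    {"title": "Is this screenwriting contest a scam?", "ups": 950, "downs": 40,
     "created": datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)},
    {"title": "My first short film!", "ups": 120, "downs": 5,
     "created": datetime(2024, 1, 2, 9, 0, tzinfo=timezone.utc)},
    {"title": "Weekly feedback thread", "ups": 30, "downs": 2,
     "created": datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)},
]

def hot_score(post, now):
    """Votes dominate, but recency keeps the front page from going stale."""
    net = post["ups"] - post["downs"]
    order = math.log10(max(abs(net), 1))   # diminishing returns on vote totals
    hours_old = (now - post["created"]).total_seconds() / 3600
    return order - hours_old / 12          # decay as the post ages

now = datetime(2024, 1, 2, 18, 0, tzinfo=timezone.utc)
for p in sorted(posts, key=lambda p: hot_score(p, now), reverse=True):
    print(round(hot_score(p, now), 2), p["title"])
```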

“Can we [users] be dragged into a lawsuit, even a well-meaning lawsuit, just because we put a two-star review for a restaurant, just because like we clicked downvote or upvote on that one post, just because we decided to help volunteer for our community and start taking out posts or adding in posts?” Lee asks. “Are [these actions] enough for us to suddenly become liable for something?”

An “existential threat” to smaller platforms 

Lee points to a case in Reddit’s recent history. In 2019, in the subreddit r/Screenwriting, users started discussing screenwriting competitions they thought might be scams. The operator of those alleged scams went on to sue the moderator of r/Screenwriting for pinning and commenting on the posts, thus prioritizing that content. The Superior Court of California in LA County excused the moderator from the lawsuit, which Reddit says was due to Section 230 protection. Lee is concerned that a different interpretation of Section 230 could leave moderators, like the one in r/Screenwriting, significantly more vulnerable to similar lawsuits in the future. 

“The reality is every Reddit user plays a role in deciding what content appears on the platform,” says Lee. “In that sense, weakening 230 can unintentionally increase liability for everyday people.” 

Llansó agrees that Section 230 explicitly protects the users of platforms, as well as the companies that host them. 

“Community moderation is often some of the most effective [online moderation] because it has people who are invested,” she says. “It’s often … people who have context and understand what people in their community do and don’t want to see.”

Wikimedia, the foundation that manages Wikipedia, is also worried that a new interpretation of Section 230 might usher in a future in which volunteer editors can be taken to court for how they deal with user-generated content. All the information on Wikipedia is generated, fact-checked, edited, and organized by volunteers, making the site particularly vulnerable to changes in liability afforded by Section 230. 

“Without Section 230, Wikipedia could not exist,” says Jacob Rogers, associate general counsel at the Wikimedia Foundation. He says the community of volunteers that manages content on Wikipedia “designs content moderation policies and processes that reflect the nuances of sharing free knowledge with the world. Alterations to Section 230 would jeopardize this process by centralizing content moderation further, eliminating communal voices, and reducing freedom of speech.”

In its own brief to the Supreme Court, Wikimedia warned that changes to liability will leave smaller technology companies unable to compete with the bigger companies that can afford to fight a host of lawsuits. “The costs of defending suits challenging the content hosted on Wikimedia Foundation’s sites would pose existential threats to the organization,” lawyers for the foundation wrote.

Lee echoes this point, noting that Reddit is “committed to maintaining the integrity of our platform regardless of the legal landscape,” but that Section 230 protects smaller internet companies that don’t have large litigation budgets, and any changes to the law would “make it harder for platforms and users to moderate in good faith.”

To be sure, not all experts think the scenarios laid out by Reddit and Wikimedia are the most likely. “This could be a bit of a mess, but [tech companies] almost always say that this is going to destroy the internet,” says Hany Farid, professor of engineering and information at the University of California, Berkeley. 

Farid supports increasing liability related to content moderation and argues that the harms of targeted, data-driven recommendations online justify some of the risks that come with a ruling against Google in the Gonzalez case. “It is true that Reddit has a different model for content moderation, but what they aren’t telling you is that some communities are moderated by and populated by incels, white supremacists, racists, election deniers, covid deniers, etc.,” he says. 

(In response to Farid’s statement, a Reddit spokesperson writes, “our sitewide policies strictly prohibit hateful content—including hate based on gender or race—as well as content manipulation and disinformation.”)

Brandie Nonnecke, founding director at the CITRIS Policy Lab, a social media and democracy research organization at the University of California, Berkeley, emphasizes a common viewpoint among experts: that regulation to curb the harms of online content is needed but should be established legislatively, rather than through a Supreme Court decision that could result in broad unintended consequences, such as those outlined by Reddit and Wikimedia.  

“We all agree that we don’t want recommender systems to be spreading harmful content,” Nonnecke says, “but trying to address it by changing Section 230 in this very fundamental way is like a surgeon using a chain saw instead of a scalpel.”

Correction: The Wikimedia Foundation was established two years after Wikipedia was launched, not before, as originally written.

This piece has also been updated to include an additional statement from Reddit.

Big Tech could help Iranian protesters by using an old tool

After the Iranian government took extreme measures to limit internet use in response to the pro-democracy protests that have filled Iranian streets since mid-September, Western tech companies scrambled to help restore access to Iranian citizens. 

Signal asked its users to help run proxy servers with support from the company. Google offered credits to help Iranians get online using Outline, the company’s own VPN. And in response to a post by US Secretary of State Antony Blinken on Iran’s censorship, Elon Musk quickly tweeted: “Activating Starlink …”

But these workarounds aren’t enough. Though the first Starlink satellites have been smuggled into Iran, restoring the internet will likely require several thousand more. Signal tells MIT Technology Review that it has been vexed by “Iranian telecommunications providers preventing some SMS validation codes from being delivered.” And Iran has already detected and shut down Google’s VPN, which is what happens when any single VPN grows too popular (plus, unlike most VPNs, Outline costs money).

What’s more, “there’s no reliable mechanism for Iranian users to find these proxies,” Nima Fatemi, head of global cybersecurity nonprofit Kandoo, points out. They’re being promoted on social media networks that are themselves banned in Iran. “While I appreciate their effort,” he adds, “it feels half-baked and half-assed.”

There is something more that Big Tech could do, according to some pro-democracy activists and experts on digital freedom. But it has received little attention—even though it’s something several major service providers offered until just a few years ago.

“One thing people don’t talk about is domain fronting,” says Mahsa Alimardani, an internet researcher at the University of Oxford and Article19, a human rights organization focused on freedom of expression and information. It’s a technique developers used for years to skirt internet restrictions like those that have made it incredibly difficult for Iranians to communicate safely. In essence, domain fronting allows apps to disguise the traffic directed toward them: the parts of a connection that a censor can observe point to an innocuous, widely used domain, while the request’s true destination travels inside the encrypted portion of the exchange, so the network never learns which site is actually being reached.

In the days of domain fronting, “cloud platforms were used for circumvention,” Alimardani explains. From 2016 to 2018, secure messaging apps like Telegram and Signal used the cloud hosting infrastructure of Google, Amazon, and Microsoft—which most of the web runs on—to disguise user traffic and successfully thwart bans and surveillance in Russia and across the Middle East.

But Google and Amazon discontinued the practice in 2018, following pushback from the Russian government and citing security concerns about how it could be abused by hackers. Now activists who work at the intersection of human rights and technology say reinstating the technique, with some tweaks, is a tool Big Tech could use to quickly get Iranians back online.

Domain fronting “is a good place to start” if tech giants really want to help, Alimardani says. “They need to be investing in helping with circumvention technology, and having stamped out domain fronting is really not a good look.”

Domain fronting could be a critical tool to help protesters and activists stay in touch with each other for planning and safety purposes, and to allow them to update worried family and friends during a dangerous period. “We recognize the possibility that we might not come back home every time we go out,” says Elmira, an Iranian woman in her 30s who asked to be identified only by her first name for security reasons.

Still, no major companies have publicly said they will consider launching or restoring the anti-censorship tool. Two of the three major service providers that previously allowed domain fronting, Google and Microsoft, could not be reached for comment. The third, Amazon, directed MIT Technology Review to a 2019 blog post in which a product manager described steps the company has taken to minimize the “abusive use of domain fronting practices.”

“A cat-and-mouse game”

By now, Iranian citizens largely expect that their digital communications and searches are being combed through by the powers of the state. “They listen and control almost all communications in order to counter demonstrations,” says Elmira. “It’s like we’re being suffocated.”

This isn’t, broadly speaking, a new phenomenon in the country. But it’s reached a crisis point over the past two months, during a growing swell of anti-government protests sparked by the death of 22-year-old Mahsa Amini on September 16 after Iran’s Guidance Patrol—more commonly known as the morality police—arrested her for wearing her hijab improperly.

“The world realized that the matter of hijab, which I myself believe is a personal choice, could become an incident over which a young girl can lose her life,” Elmira says. 

According to rights groups, over 300 people, including at least 41 children, have been killed since protests began. The crackdown has been especially brutal in largely Kurdish western Iran, where Amini was from and Elmira now lives. Severely restricting internet access has been a way for the regime to further crush dissent. “This is not the first time that the internet services have been disrupted in Iran,” Elmira says. “The reason for this action is the government’s fear, because there is no freedom of speech here.”

The seeds of today’s digital repression trace back to 2006, when Iran announced plans to craft its own intranet—an exclusive, national network designed to keep Iranians off the World Wide Web. 

“This is really hard to do,” says Kian Vesteinsson, a senior analyst for the global democracy nonprofit Freedom House. That’s because it requires replicating global infrastructure with domestic resources while pruning global web access.

The payoff is “digital spaces that are easier to monitor and to control,” Vesteinsson says. Of the seven countries trying to isolate themselves from the global internet, Iran is the furthest along today.

Iran debuted its National Information Network in 2019, when authorities hit a national kill switch on the global web amid protests over gas prices. During a week when the country was electronically cut off from the rest of the world, the regime killed 1,500 people. The Iranian economy, which relies on broader connectivity to do business, lost over a billion US dollars during the bloody week. 

While recently Iran has intermittently cut access to the entire global internet in some regions, it hasn’t instituted another total global web shutdown. Instead, it is largely pursuing censorship strategies designed to crush dissent while sparing the economy. Rolling “digital curfews” are in place from about 4 p.m. into the early morning hours—ensuring that the web becomes incredibly difficult to access during the period when most protests occur.

The government has blocked most popular apps, including Twitter, Instagram, Facebook, and WhatsApp, in favor of local copycat apps where no message or search is private.

“The messaging apps we use, like WhatsApp, have a certain level of protection embedded in their coding,” Elmira says. “We feel more comfortable using them. [The government] cannot have control over them, and as a result, they restrict access.”

The Iranian regime is also aggressively shutting down VPNs, which were a lifeline for many Iranians and the country’s most popular censorship workaround. About 80% of Iranians use tools to bypass censorship and use apps they prefer. “Even my grandpa knows how to install a VPN app,” an Iranian woman who requested anonymity for safety reasons tells me. 

To crush VPN use, Iran’s government has invested heavily in “deep packet inspection,” a technology that peers into the fine print of internet traffic and can recognize and shut down nearly any VPN with time.

That’s created a “cat-and-mouse game,” says Alimardani, the internet researcher. “You need to be offering, like, thousands of VPNs,” she says, so that some will remain available as Iran diligently recognizes and blocks others. Without enough VPNs, activists aren’t left with many secure communication options, making it much harder for Iranians to coordinate protests and communicate with the outside world as death tolls climb.

Domain fronting to beat censors

Domain fronting works by concealing the app or website a user ultimately wants to reach. It’s sort of like putting a correctly addressed postcard in an envelope with a different, innocuous destination—then having someone at the fake-out address hand-deliver it.
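
In code, that “envelope” trick can be as small as a mismatch between the domain a request is addressed to and the Host header carried inside it. The sketch below, in Python, uses placeholder hostnames and assumes a cloud edge that still honors the mismatch; it is an illustration of the mechanism, not a working circumvention tool.

```python
# Minimal sketch of domain fronting with placeholder hostnames.
# The URL's hostname is what the network observer sees (DNS lookup, TLS
# handshake); the Host header inside the encrypted request names the real,
# blocked destination. This only works if the hosting provider routes on
# the inner Host header, which is exactly the policy question at issue.

import requests

FRONT_DOMAIN = "allowed-cdn.example.com"    # visible, innocuous front (placeholder)
HIDDEN_SERVICE = "blocked-app.example.com"  # true destination (placeholder)

response = requests.get(
    f"https://{FRONT_DOMAIN}/api/v1/messages",
    headers={"Host": HIDDEN_SERVICE},  # delivered to the hidden service at the edge
    timeout=10,
)
print(response.status_code)
```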

The technique is attractive because it’s implemented by service providers rather than individuals, who may or may not be tech savvy. It also makes censorship more painful for governments to pursue. The only way to ban a domain-fronted app is to shut down the entire web hosting provider the app uses—bringing an avalanche of other apps and sites down with it. And since Microsoft, Amazon, and Google provide hosting services for most of the digital world, domain fronting by those companies would force countries to crash much of the internet in order to deny access to an undesired app.

“There’s no way to just pick out Telegram. That’s the power of it,” says Erik Hunstad, a security expert and CTO of the cybersecurity company SixGen.

Nevertheless, in April 2018, Russia blocked Amazon, Google, and a host of other popular services in order to ban the secure-messaging app Telegram, which initially used domain fronting to beat censors. These disruptions made the ban broadly unpopular with average Russians, not just activists who favored the app. 

The Russian government, in turn, exerted pressure on Amazon and Google to end the practice.

In April 2018, the companies terminated support for domain fronting altogether. “Amazon and Google just completely disabled this potentially extremely useful service,” Alimardani says. 

Google made the change quietly, but soon afterwards, it described domain fronting to the Verge as a “quirk” of its software. In its own announcement, Amazon said domain fronting could help malware masquerade as standard traffic. Hackers could also abuse the technique—the Russian hacker group APT29 has used domain fronting, alongside other means, to access classified data.

Still, Signal, which began using domain fronting in 2016 to operate in several Middle Eastern countries attempting to block the app, issued a statement at the time: “The censors in these countries will have (at least temporarily) achieved their goals.”

“While domain fronting still works with domains on smaller networks, this greatly limits the current utility of the technique,” says Simon Migliano, a digital privacy expert and head of research at Top10VPN, an independent VPN review website.

(Microsoft announced a ban on domain fronting in 2021, but the cloud infrastructure that enables the technique remained intact. Earlier this week, Microsoft wrote that, going forward, it will “block any HTTP request that exhibits domain fronting behavior.”)

Migliano echoes Google in describing domain fronting as “essentially a bug,” and he admits it has “very real security risks.” It is “certainly a shame” that companies are revoking it, he says, “but you can understand their position.”

But Hunstad says there are ways to minimize the cybersecurity risks of domain fronting while preserving its use as an anti-censorship tool. He explains that the way networks process user requests means Google, Amazon, or Microsoft could easily greenlight the use of domain fronting for certain apps, like WhatsApp or Telegram, while otherwise banning the tactic.
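
A rough sketch of what that selective approval could look like at a provider’s edge, using hypothetical names: forward mismatched (fronted) requests only when the hidden destination belongs to an approved anti-censorship app.

    # Hedged sketch of the selective-fronting idea Hunstad describes: compare the
    # outer (fronted) domain with the inner Host header and forward mismatches
    # only for explicitly approved apps. All names here are hypothetical.
    APPROVED_FRONTABLE_HOSTS = {"signal.example", "telegram.example"}

    def should_forward(outer_sni: str, inner_host: str) -> bool:
        """Allow ordinary traffic, plus fronted traffic for approved apps only."""
        if outer_sni == inner_host:
            return True  # normal, non-fronted request
        return inner_host in APPROVED_FRONTABLE_HOSTS

    print(should_forward("cdn-front.example", "signal.example"))      # True: approved fronting
    print(should_forward("cdn-front.example", "random-app.example"))  # False: blocked fronting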

Rather than technical limitations, Hunstad says, it’s a “prisoner’s dilemma situation [for] the big providers” that is keeping them from re-enabling domain fronting—they’re stuck between pressure from authoritarian governments and an outcry from activists. He speculates that financial imperatives are part of the calculus as well. 

“If I’m hosting my website with Google, and they decide to enable this for Signal and Telegram, or maybe across the board, and multiple countries decide to remove access to all of Google because of that—then I have potentially less reach,” Hunstad says. “I’ll just go to the provider that’s not doing it, and Google is going to have a business impact.” 

The likelihood that Amazon or Google will reinstate domain fronting depends on “how cynical you are about their profit motives versus their good intentions for the world,” Hunstad adds. 

What’s next

While Fatemi, from Kandoo, argues that restoring domain fronting would be helpful for Iranian protesters, he emphasizes that it wouldn’t be a silver bullet. 

“In the short term, if they can relax domain fronting so that people, for example, can use Signal, or people can connect to VPN connections, that would be phenomenal,” he says. He adds that to move solutions along more quickly, companies like Google could collaborate with nonprofits that specialize in deploying tech in vulnerable situations. 

But Big Tech companies also need to commit a bigger slice of their resources and talent to developing technologies that can beat internet censorship, he says: “[Domain fronting is] a Band-Aid on a much larger problem. If we want to go at a much larger problem, we have to dedicate engineers.” 

Until the world finds an enduring solution to authoritarian attempts to splinter the global web, tech companies that want to help people will be left scrambling for reactive tactics. 

“There needs to be a whole toolkit of different kinds of VPNs and circumvention tools right now, because what they are doing is highly sophisticated,” Alimardani says. “Google is one of the richest and most powerful companies in the world. And offering one VPN is really not enough.”

So for now, seven weeks into Iran’s protests, internet and VPN access remain throttled, restrictions show no sign of slowing, and domain fronting remains dead. And it’s the citizens on the front lines who have to carry the biggest burden.

“The conditions are dire here,” Elmira tells me. The lack of connectivity has made massacres difficult to verify and has complicated efforts to sustain protests and other activism. 

“To counter the demonstrations, they cut off our access to the internet and social media,” she says. 

But Elmira is resolute. “I, myself, and many of my friends now go out with no fear,” she says. “We know that they might shoot us. But it is worth taking this risk and to go out and try our best instead of staying home and continuing taking this.”

YouTube wants to take on TikTok and put its Shorts videos on your TV

YouTube Shorts, the video website’s TikTok-like feature, has become one of its latest obsessions, with more than 1.5 billion users watching short-form content on their devices every month.

And now YouTube wants to expand that number by bringing full-screen, vertical videos to your TV, MIT Technology Review can reveal.

From today, users worldwide will see a row of Shorts videos high up on the display of YouTube’s smart TV apps. The videos, which will be integrated into the standard homepage of YouTube’s TV app and will sit alongside longer, landscape videos, are presented on the basis of previous watch history, much as in the YouTube Shorts tab on cell phones and the YouTube website.

“It is challenging taking a format that’s traditionally a mobile format and finding the right way to bring it to life on TV,” says Brynn Evans, UX director for the YouTube app on TV.

The time spent developing the TV app integration is testament to the importance of Shorts to YouTube, says Melanie Fitzgerald, UX director at YouTube Community and Shorts. “Seeing the progression of short-form video over several years, from Vine to Musical.ly to TikTok to Instagram and to YouTube, it’s very clear this format is here to stay.”

One major challenge the designers behind YouTube Shorts’ TV integration had to consider was the extent to which Shorts videos should be allowed to autoplay. At present, the initial design will require viewers to manually scroll through Shorts videos once they’re playing and move on to the next one by pressing the up and down arrows on their TV remote.

“One piece we were playing with was how much do we want this to be a fully lean-back experience, where you turn it on and Shorts cycle through,” says Evans, whose team decided against that option at launch but does not rule out changing future iterations.

The design presents a single Shorts video at a time in the center of the TV screen, surrounded by white space that changes color depending on the overall look of the video.

One thing YouTube didn’t test—at least as of now? Filling the white space with ads. YouTube spokesperson Susan Cadrecha tells MIT Tech Review that the experience will initially be ad-free. The spokesperson did say that ads would likely be added at some point, but how those would be integrated into the Shorts on TV experience was not clear.

Likewise, the YouTube Shorts team is investigating how to integrate comments into TV viewing for future iterations of the app. “For a mobile format like this, you’d be able to maybe use your phone as a companion and leave some comments and they can appear on TV,” says Evans. 

YouTube’s announcement follows TikTok’s own move into developing a TV app. First launched in February 2021 in France, Germany, and the UK, and expanded to the United States and elsewhere in November of that year, TikTok’s smart TV app hasn’t altered much about how the main app works. (Nor, arguably, has it become an irreplaceable part of people’s living room habits.)

However, the shift to fold Shorts into the YouTube experience on TV suggests how important YouTube feels the short-form model is to its future. “It’s very clearly a battle for attention across devices,” says Andrew A. Rosen, founder and principal at media analyst Parqor. “The arrival of Shorts and TikTok on connected TVs makes the competitive landscape that much more complex.” Having ceded a head start to TikTok, YouTube now seems determined to play catch-up.

The team behind the initiative still isn’t fully certain how adding short-form video into the YouTube on TV experience will be embraced. “It still remains to be seen how and when people will consume Shorts,” admits Evans—though she tells MIT Tech Review that informal polling and qualitative surveys, plus tests within the Google community, suggest “a very positive impression of Shorts from people who are watching YouTube on TV.” (YouTube declined to share its own data on how much time the average user currently spends watching YouTube content on TV but did point to Nielsen data showing that viewers worldwide spent 700 million hours a day on that activity.)

“Will it be a game-changer in the living room? Yes and no,” says Rosen. “Yes in the sense that it will turn 15-second to 60-second clips into competition for every legacy media streaming service, and Netflix is betting billions on content to be consumed on those same TVs. No, because it’s not primed to become a new default of consumption.”

Google launches its third major operating system, Fuchsia

Google is officially rolling out a new operating system, called Fuchsia, to consumers. The release is a bit hard to believe at this point, but Google confirmed the news to 9to5Google, and several members of the Fuchsia team have confirmed it on Twitter. The official launch date was apparently yesterday. Fuchsia is certainly getting a quiet, anti-climactic release, as it’s only being made available to one device, the Google Home Hub, aka the first-generation Nest Hub. There are no expected changes to the UI or functionality of the Home Hub, but Fuchsia is out there. Apparently, Google simply wants to prove out the OS in a consumer environment.

Fuchsia’s one launch device was originally called the Google Home Hub and is a 7-inch smart display that responds to Google Assistant commands. It came out in 2018. The device was renamed the “Nest Hub” in 2019, and it’s only this first-generation device, not the second-generation Nest Hub or Nest Hub Max, that is getting Fuchsia. The Home Hub’s OS has always been an odd duck. When the device was released, Google was pitching a smart display hardware ecosystem to partners based on Android Things, a now-defunct Internet-of-things/kiosk OS. Instead of following the recommendations it gave to hardware partners, Google loaded the Home Hub with its in-house Google Cast Platform, and then undercut all its partners on price.

Fuchsia has long been a secretive project. We first saw the OS as a pre-alpha smartphone UI that was ported to Android in 2017. In 2018, we got the OS running natively on a Pixelbook. After that, the Fuchsia team stopped doing its work in the open and stripped all UI work out of the public repository.

There’s no blog post or any fanfare at all to mark Fuchsia’s launch. Google’s I/O conference happened last week, and the company didn’t make a peep about Fuchsia there, either. Really, this ultra-quiet, invisible release is the most “Fuchsia” launch possible.

Fuchsia is something very rare in the world of tech: it’s a built-from-scratch operating system that isn’t based on Linux. Fuchsia uses a microkernel called “Zircon” that Google developed in house. Creating an operating system entirely from scratch and bringing it all the way to production sounds like a difficult task, but Google managed to do exactly that over the past six years. Fuchsia’s primary app-development framework is Flutter, a cross-platform UI toolkit from Google. Flutter runs on Android, iOS, and the web, so writing Flutter apps today for existing platforms means you’re also writing Fuchsia apps for tomorrow.

The Nest Hub’s switch to Fuchsia is kind of interesting because of how invisible it should be. It will be the first test of Fuchsia’s forward-looking Flutter app support: the Google smart display interface is written in Flutter, so Google can take the existing interface, rip out all the Google Cast guts underneath, and plop the exact same interface code down on top of Fuchsia. Google watchers have long speculated that this was the plan all along. Rather than forcing a disruptive OS switch, Google could simply get coders to write in Flutter and then seamlessly swap out the operating system underneath.

So, unless we get lucky, don’t expect a dramatic hands-on post of Fuchsia running on the Nest Hub. It’s likely that there isn’t currently much to see or do with the new operating system, and that’s exactly how Google wants it. Fuchsia is more than just a smart-display operating system, though. An old Bloomberg report from 2018 has absolutely nailed the timing of Fuchsia so far, saying that Google wanted to first ship the OS on connected home devices “within three years”—the report turns three years old in July. The report also laid out the next steps for Fuchsia, including an ambitious expansion to smartphones and laptops by 2023.

Taking over the Nest Hub is one thing—no other team at Google really has a vested interest in the Google Cast OS (you could actually argue that the Cast OS is on the way out, as the latest Chromecast is switching to Android). Moving the OS onto smartphones and laptops is an entirely different thing, though, since the Fuchsia team would crash into the Android and Chrome OS divisions. Now you’re getting into politics.