Core Web Vitals Documentation Updated

The official documentation for how Core Web Vitals are scored was recently updated with new insights into how the Interaction to Next Paint (INP) scoring thresholds were chosen, offering a better understanding of the metric.

Interaction to Next Paint (INP)

Interaction to Next Paint (INP) is a relatively new metric, officially becoming a Core Web Vital in the spring of 2024. It measures how long it takes a site to respond to interactions like clicks, taps, and keyboard presses (physical or onscreen).

The official Web.dev documentation defines it:

“INP observes the latency of all interactions a user has made with the page, and reports a single value which all (or nearly all) interactions were beneath. A low INP means the page was consistently able to respond quickly to all—or the vast majority—of user interactions.”

INP measures the latency of all the interactions on the page, which is different from the now-retired First Input Delay (FID) metric, which only measured the delay of the first interaction. INP is considered a better measurement than FID because it provides a more accurate picture of the actual user experience.

INP Core Web Vitals Score Thresholds

The main change to the documentation is a new explanation of how the performance thresholds for “poor,” “needs improvement,” and “good” scores were chosen.

One of the choices the Chrome team faced was how to handle scoring across device types, because it’s easier to achieve good INP scores on a desktop than on a mobile device: external factors like network speed and device capabilities heavily favor desktop environments.

But the user experience is not device dependent, so rather than create different thresholds for different kinds of devices, they settled on a single set of thresholds based on mobile devices.

The new documentation explains:

“Mobile and desktop usage typically have very different characteristics as to device capabilities and network reliability. This heavily impacts the “achievability” criteria and so suggests we should consider separate thresholds for each.

However, users’ expectations of a good or poor experience is not dependent on device, even if the achievability criteria is. For this reason the Core Web Vitals recommended thresholds are not segregated by device and the same threshold is used for both. This also has the added benefit of making the thresholds simpler to understand.
Additionally, devices don’t always fit nicely into one category. Should this be based on device form factor, processing power, or network conditions? Having the same thresholds has the side benefit of avoiding that complexity.

The more constrained nature of mobile devices means that most of the thresholds are therefore set based on mobile achievability. They more likely represent mobile thresholds—rather than a true joint threshold across all device types. However, given that mobile is often the majority of traffic for most sites, this is less of a concern.”

These are the thresholds Chrome settled on:

  • Scores under 200 ms (milliseconds) represent a “good” score.
  • Scores between 200 ms and 500 ms represent a “needs improvement” score.
  • Scores over 500 ms represent a “poor” score.
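As a rough illustration (this is not code from the documentation), the three bands above can be expressed as a small classification function. The function name is made up for this sketch, and the handling of the exact 200 ms and 500 ms boundaries is an assumption based on the ranges described:

```typescript
// Illustrative sketch: classify a field-measured INP value (in
// milliseconds) into the three Core Web Vitals bands described above.
// Assumption: a value of exactly 200 ms counts as "good" and exactly
// 500 ms as "needs improvement".
type InpRating = "good" | "needs improvement" | "poor";

function rateInp(inpMs: number): InpRating {
  if (inpMs <= 200) return "good";
  if (inpMs <= 500) return "needs improvement";
  return "poor";
}

console.log(rateInp(150)); // "good"
console.log(rateInp(350)); // "needs improvement"
console.log(rateInp(650)); // "poor"
```

In practice, tools like Chrome’s CrUX report compute INP from real user interactions; a function like this only interprets an already-measured value.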

Screenshot of an Interaction to Next Paint (INP) Core Web Vitals score

Lower End Devices Were Considered

Chrome was focused on choosing achievable metrics. That’s why the INP thresholds had to be realistic for lower-end mobile devices, which account for a large share of internet access.

They explained:

“We also spent extra attention looking at achievability of passing INP for lower-end mobile devices, where those formed a high proportion of visits to sites. This further confirmed the suitability of a 200 ms threshold.

Taking into consideration the 100 ms threshold supported by research into the quality of experience and the achievability criteria, we conclude that 200 ms is a reasonable threshold for good experiences”

Most Popular Sites Influenced INP Thresholds

Another interesting insight from the new documentation is that real-world achievability was another consideration in setting the INP thresholds, which are measured in milliseconds (ms). The Chrome team examined the performance of the top 10,000 websites, which make up the vast majority of website visits, in order to dial in the right threshold for a poor score.

What they discovered is that the top 10,000 websites struggled to achieve performance of 300 ms. CrUX data, which reports real-world user experience, showed that 55% of visits to the most popular sites were at the 300 ms threshold. That meant the Chrome team had to choose a higher millisecond threshold that was achievable by the most popular sites.

The new documentation explains:

“When we look at the top 10,000 sites—which form the vast majority of internet browsing—we see a more complex picture emerge…

On mobile, a 300 ms “poor” threshold would classify the majority of popular sites as “poor” stretching our achievability criteria, while 500 ms fits better in the range of 10-30% of sites. It should also be noted that the 200 ms “good” threshold is also tougher for these sites, but with 23% of sites still passing this on mobile this still passes our 10% minimum pass rate criteria.

For this reason we conclude a 200 ms is a reasonable “good” threshold for most sites, and greater than 500 ms is a reasonable “poor” threshold.”

Barry Pollard, a Web Performance Developer Advocate on Google Chrome who is a co-author of the documentation, added a comment to a discussion on LinkedIn that offers more background information:

“We’ve made amazing strides on INP in the last year. Much more than we could have hoped for. But less than 200ms is going to be very tough on low-end mobile devices for some time. While high-end mobile devices are absolute power horses now, the low-end is not increasing at anywhere near that rate…”

A Deeper Understanding Of INP Scores

The new documentation offers a better understanding of how Chrome chooses achievable metrics and takes some of the mystery out of the relatively new INP Core Web Vital metric.

Read the updated documentation:

How the Core Web Vitals metrics thresholds were defined

Featured Image by Shutterstock/Vectorslab

This AI system makes human tutors better at teaching children math

The US has a major problem with education inequality. Children from low-income families are less likely to receive high-quality education, partly because poorer districts struggle to retain experienced teachers. 

Artificial intelligence could help, by improving the one-on-one tutoring sometimes used to supplement class instruction in these schools. With help from an AI tool, tutors could tap into more experienced teachers’ expertise during virtual tutoring sessions. 

Researchers from Stanford University developed an AI system called Tutor CoPilot on top of OpenAI’s GPT-4 and integrated it into a platform called FEV Tutor, which connects students with tutors virtually. Tutors and students type messages to one another through a chat interface, and a tutor who needs help explaining how and why a student went wrong can press a button to generate suggestions from Tutor CoPilot. 

The researchers created the model by training GPT-4 on a database of 700 real tutoring sessions in which experienced teachers worked one-on-one with first- through fifth-grade students on math lessons, identifying the students’ errors and then working with them to correct the errors in such a way that they learned to understand the broader concepts being taught. From this, the model generates responses that tutors can customize to help their online students.

“I’m really excited about the future of human-AI collaboration systems,” says Rose Wang, a PhD student at Stanford University who worked on the project, which was published on arXiv and has not yet been peer-reviewed. “I think this technology is a huge enabler, but only if it’s designed well.”

The tool isn’t designed to actually teach the students math—instead, it offers tutors helpful advice on how to nudge students toward correct answers while encouraging deeper learning. 

For example, it can suggest that the tutor ask how the student came up with an answer, or propose questions that could point to a different way to solve a problem. 

To test its efficacy, the team examined the interactions of 900 tutors virtually teaching math to 1,787 students between five and 13 years old from historically underserved communities in the US South. Half the tutors had the option to activate Tutor CoPilot, while the other half did not. 

The students whose tutors had access to Tutor CoPilot were 4 percentage points more likely to pass their exit ticket—an assessment of whether a student has mastered a subject—than those whose tutors did not have access to it. (Pass rates were 66% and 62%, respectively.)

The tool works as well as it does because it’s being used to teach relatively basic mathematics, says Simon Frieder, a machine-learning researcher at the University of Oxford, who did not work on the project. “You couldn’t really do a study with much more advanced mathematics at this current point in time,” he says.

The team estimates that the tool could improve student learning at a cost of around $20 per tutor annually to the tutoring provider, which is significantly cheaper than the thousands of dollars it usually takes to train educators in person. 

It has the potential to improve the relationship between novice tutors and their students by training them to approach problems the way experienced teachers do, says Mina Lee, an assistant professor of computer science at the University of Chicago, who was not involved in the project.

“This work demonstrates that the tool actually does work in real settings,” she says. “We want to facilitate human connection, and this really highlights how AI can augment human-to-human interaction.”

As a next step, Wang and her colleagues are interested in exploring how well novice tutors remember the teaching methods imparted by Tutor CoPilot. This could help them gain a sense of how long the effects of these kinds of AI interventions might last. They also plan to try to work out which other school subjects or age groups could benefit from such an approach.

“There’s a lot of substantial ways in which the underlying technology can get better,” Wang says. “But we’re not deploying an AI technology willy-nilly without pre-validating it—we want to be sure we’re able to rigorously evaluate it before we actually send it out into the wild. For me, the worst fear is that we’re wasting the students’ time.”

Palmer Luckey on the Pentagon’s future of mixed reality

Palmer Luckey has, in some ways, come full circle. 

His first experience with virtual-reality headsets was as a teenage lab technician at a defense research center in Southern California, studying their potential to curb PTSD symptoms in veterans. He then built Oculus, sold it to Facebook for $2 billion, left Facebook after a highly public ousting, and founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion.

Now Luckey is redirecting his energy again, to headsets for the military. In September, Anduril announced it would partner with Microsoft on the US Army’s Integrated Visual Augmentation System (IVAS), arguably the military’s largest effort to develop a headset for use on the battlefield. Luckey says the IVAS project is his top priority at Anduril.

“There is going to be a heads-up display on every soldier within a pretty short period of time,” he told MIT Technology Review in an interview last week on his work with the IVAS goggles. “The stuff that we’re building—it’s going to be a big part of that.”

Though few would bet against Luckey’s expertise in the realm of mixed reality, few observers share his optimism for the IVAS program. They view it, thus far, as an avalanche of failures. 

IVAS was first approved in 2018 as an effort to build state-of-the-art mixed-reality headsets for soldiers. In March 2021, Microsoft was awarded nearly $22 billion over 10 years to lead the project, but it quickly became mired in delays. Just a year later, a Pentagon audit criticized the program for not properly testing the goggles, saying its choices “could result in wasting up to $21.88 billion in taxpayer funds to field a system that soldiers may not want to use or use as intended.” The first two variants of the goggles—of which the army purchased 10,000 units—gave soldiers nausea, neck pain, and eye strain, according to internal documents obtained by Bloomberg. 

Such reports have left IVAS on a short leash with members of the Senate Armed Services Committee, which helps determine how much money should be spent on the program. In a subcommittee meeting in May, Senator Tom Cotton, an Arkansas Republican and ranking member, expressed frustration at IVAS’s slow pace and high costs, and in July the committee suggested a $200 million cut to the program. 

Meanwhile, Microsoft has for years been cutting investments into its HoloLens headset—the hardware on which the IVAS program is based—for lack of adoption. In June, Microsoft announced layoffs to its HoloLens teams, suggesting the project is now focused solely on serving the Department of Defense. The company received a serious blow in August, when reports revealed that the Army is considering reopening bidding for the contract to oust Microsoft entirely. 

This is the catastrophe that Luckey’s stepped into. Anduril’s contribution to the project will be Lattice, an AI-powered system that connects everything from drones to radar jammers to surveil, detect objects, and aid in decision-making. Lattice is increasingly becoming Anduril’s flagship offering. It’s a tool that allows soldiers to receive instantaneous information not only from Anduril’s hardware, but also from radars, vehicles, sensors, and other equipment not made by Anduril. Now it will be built into the IVAS goggles. “It’s not quite a hive mind, but it’s certainly a hive eye” is how Luckey described it to me. 

Palmer Luckey holding an autonomous drone interceptor
Anvil, seen here held by Luckey in Anduril’s Costa Mesa Headquarters, integrates with the Lattice OS and can navigate autonomously to intercept hostile drones.
PHILIP CHEUNG

Boosted by Lattice, the IVAS program aims to produce a headset that can help soldiers “rapidly identify potential threats and take decisive action” on the battlefield, according to the Army. If designed well, the device will automatically sort through countless pieces of information—drone locations, vehicles, intelligence—and flag the most important ones to the wearer in real time. 

Luckey defends the IVAS program’s bumps in the road as exactly what one should expect when developing mixed reality for defense. “None of these problems are anything that you would consider insurmountable,” he says. “It’s just a matter of if it’s going to be this year or a few years from now.” He adds that delaying a product is far better than releasing an inferior product, quoting Shigeru Miyamoto, the game director of Nintendo: “A delayed game is delayed only once, but a bad game is bad forever.”

He’s increasingly convinced that the military, not consumers, will be the most important testing ground for mixed-reality hardware: “You’re going to see an AR headset on every soldier, long before you see it on every civilian,” he says. In the consumer world, any headset company is competing with the ubiquity and ease of the smartphone, but he sees entirely different trade-offs in defense.

“The gains are so different when we talk about life-or-death scenarios. You don’t have to worry about things like ‘Oh, this is kind of dorky looking,’ or ‘Oh, you know, this is slightly heavier than I would prefer,’” he says. “Because the alternatives of, you know, getting killed or failing your mission are a lot less desirable.”

Those in charge of the IVAS program remain steadfast in the expectation that it will pay off with huge gains for those on the battlefield. “If it works,” James Rainey, commanding general of the Army Futures Command, told the Armed Services Committee in May, “it is a legitimate 10x upgrade to our most important formations.” That’s a big “if,” and one that currently depends on Microsoft’s ability to deliver. Luckey didn’t get specific when I asked if Anduril was positioning itself to bid to become IVAS’s primary contractor should the opportunity arise. 

If that happens, US troops may, willingly or not, become the most important test subjects for augmented- and virtual-reality technology as it is developed in the coming decades. The commercial sector doesn’t have thousands of individuals within a single institution who can test hardware in physically and mentally demanding situations and provide their feedback on how to improve it. 

That’s one of the ways selling to the defense sector is very different from selling to consumers, Luckey says: “You don’t actually have to convince every single soldier that they personally want to use it. You need to convince the people in charge of him, his commanding officer, and the people in charge of him that this is a thing that is worth wearing.” The iterations that eventually come from IVAS—if it keeps its funding—could signal what’s coming next for the commercial market. 

When I asked Luckey if there were lessons from Oculus he had to unlearn when working with the Department of Defense, he said there’s one: worrying about budgets. “I prided myself for years, you know—I’m the guy who’s figured out how to make VR accessible to the masses by being absolutely brutal at every part of the design process, trying to get costs down. That isn’t what the DOD wants,” he says. “They don’t want the cheapest headset in a vacuum. They want to save money, and generally, spending a bit more money on a headset that is more durable or that has better vision—and therefore allows you to complete a mission faster—is definitely worth the extra few hundred dollars.”

I asked if he’s impressed by the progress that’s been made during his eight-year hiatus from mixed reality. Since he left Facebook in 2017, Apple, Magic Leap, Meta, Snap, and a cascade of startups have been racing to move the technology from the fringe to the mainstream. Everything in mixed reality is about trade-offs, he says. Would you like more computing power, or a lighter and more comfortable headset? 

With more time at Meta, “I would have made different trade-offs in a way that I think would have led to greater adoption,” he says. “But of course, everyone thinks that.” While he’s impressed with the gains, “having been on the inside, I also feel like things could be moving faster.”

Years after leaving, Luckey remains noticeably annoyed by one specific decision he thinks Meta got wrong: not offloading the battery. Dwelling on technical details is unsurprising from someone who spent his formative years living in a trailer in his parents’ driveway posting in obscure forums and obsessing over goggle prototypes. He pontificated on the benefits of packing the heavy batteries and chips in removable pucks that the user could put in a pocket, rather than in the headset itself. Doing so makes the headset lighter and more comfortable. He says he was pushing Facebook to go that route before he was ousted, but when he left, it abandoned the idea. Apple chose to have an external battery for its Vision Pro, which Luckey praised. 

“Anyway,” he told me. “I’m still sore about it eight years later.”

Speaking of soreness, Luckey’s most public professional wound, his ouster from Facebook in 2017, was partially healed last month. The story—involving countless Twitter threads, doxxing, retractions and corrections to news articles, suppressed statements, and a significant segment in Blake Harris’s 2020 book The History of the Future—is difficult to boil down. But here’s the short version: A donation by Luckey to a pro-Trump group called Nimble America in late 2016 led to turmoil within Facebook after it was reported by the Daily Beast. That turmoil grew, especially after Ars Technica wrote that his donation was funding racist memes (the founders of Nimble America were involved in the subreddit r/TheDonald, but the organization itself was focused on creating pro-Trump billboards). Luckey left in March 2017, but Meta has never disclosed why. 

This April, Oculus’s former CTO John Carmack posted on X that he regretted not supporting Luckey more. Meta’s CTO, Andrew Bosworth, argued with Carmack, largely siding with Meta. In response, Luckey said, “You publicly told everyone my departure had nothing to do with politics, which is absolutely insane and obviously contradicted by reams of internal communications.” The two argued. In the X argument, Bosworth cautioned that there are “limits on what can be said here,” to which Luckey responded, “I am down to throw it all out there. We can make everything public and let people judge for themselves. Just say the word.” 

Six months later, Bosworth apologized to Luckey for the comments. Luckey responded, writing that although he is “infamously good at holding grudges,” neither Bosworth nor current leadership at Meta was involved in the incident. 

By now Luckey has spent years mulling over how much of his remaining anger is irrational or misplaced, but one thing is clear. He has a grudge left, but it’s against people behind the scenes—PR agents, lawyers, reporters—who, from his perspective, created a situation that forced him to accept and react to an account he found totally flawed. He’s angry about the steps Facebook took to keep him from communicating his side (Luckey has said he wrote versions of a statement at the time but that Facebook threatened further escalation if he posted it).

“What am I actually angry at? Am I angry that my life went in that direction? Absolutely,” he says.

“I have a lot more anger for the people who lied in a way that ruined my entire life and that saw my own company ripped out from under me that I’d spent my entire adult life building,” he says. “I’ve got plenty of anger left, but it’s not at Meta, the corporate entity. It’s not at Zuck. It’s not at Boz. Those are not the people who wronged me.”

While various subcommittees within the Senate and House deliberate how many millions to spend on IVAS each year, what is not in question is that the Pentagon is investing to prepare for a potential conflict in the Pacific between China and Taiwan. The Pentagon requested nearly $10 billion for the Pacific Deterrence Initiative in its latest budget. The prospect of such a conflict is something Luckey considers often. 

He told the authors of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War that Anduril’s “entire internal road map” has been organized around the question “How do you deter China? Not just in Taiwan, but Taiwan and beyond?”

At this point, nothing about IVAS is geared specifically toward use in the South Pacific as opposed to Ukraine or anywhere else. The design is in early stages. According to transcripts of a Senate Armed Services Subcommittee meeting in May, the military was scheduled to receive the third iteration of IVAS goggles earlier this summer. If they were on schedule, they’re currently in testing. That version is likely to change dramatically before it approaches Luckey’s vision for the future of mixed-reality warfare, in which “you have a little bit of an AI guardian angel on your shoulder, helping you out and doing all the stuff that is easy to miss in the midst of battle.”

Palmer Luckey sitting on yellow metal staircase
Designs for IVAS will have to adapt amid a shifting landscape of global conflict.
PHILIP CHEUNG

But will soldiers ever trust such a “guardian angel”? If the goggles of the future rely on AI-powered software like Lattice to identify threats—say, an enemy drone ahead or an autonomous vehicle racing toward you—Anduril is making the promise that it can sort through the false positives, recognize threats with impeccable accuracy, and surface critical information when it counts most. 

Luckey says the real test is how the technology compares with the current abilities of humans. “In a lot of cases, it’s already better,” he says, referring to Lattice, as measured by Anduril’s internal tests (it has not released these, and they have not been assessed by any independent external experts). “People are fallible in ways that machines aren’t necessarily,” he adds.

Still, Luckey admits he does worry about the threats Lattice will miss.

“One of the things that really worries me is there’s going to be people who die because Lattice misunderstood something, or missed a threat to a soldier that it should have seen,” he says. “At the same time, I can recognize that it’s still doing far better than people are doing today.”

When Lattice makes a significant mistake, it’s unlikely the public will know. Asked about the balance between transparency and national security in disclosing these errors, Luckey said that Anduril’s customer, the Pentagon, will receive complete information about what went wrong. That’s in line with the Pentagon’s policies on responsible AI adoption, which require that AI-driven systems be “developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.” 

However, the policies promise nothing about disclosure to the public, a fact that’s led some progressive think tanks, like the Brennan Center for Justice, to call on federal agencies to modernize public transparency efforts for the age of AI. 

“It’s easy to say, Well, shouldn’t you be honest about this failure of your system to detect something?” Luckey says, regarding Anduril’s obligations. “Well, what if the failure was because the Chinese figured out a hole in the system and leveraged that to speed past our defenses of some military base? I’d say there’s not very much public good served in saying, ‘Attention, everyone—there is a way to get past all of the security on every US military base around the world.’ I would say that transparency would be the worst thing you could do.”

The Download: an interview with Palmer Luckey, and AI-assisted math tutors

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Palmer Luckey on the Pentagon’s future of mixed reality

Palmer Luckey has, in some ways, come full circle. 

His first experience with virtual-reality headsets was as a teenage lab technician at a defense research center in Southern California, studying their potential to curb PTSD symptoms in veterans. He then built Oculus, sold it to Facebook for $2 billion, left Facebook after a highly public ousting, and founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion.

Now Luckey is redirecting his energy again, to headsets for the military. In September, Anduril announced it would partner with Microsoft on the US Army’s Integrated Visual Augmentation System (IVAS), arguably the military’s largest effort to develop a headset for use on the battlefield. Luckey says the IVAS project is his top priority at Anduril. 

He spoke to MIT Technology Review about his plans. Read the full interview.

—James O’Donnell 

This AI system makes human tutors better at teaching children math

The US has a major problem with education inequality. Children from low-income families are less likely to receive high-quality education, partly because poorer districts struggle to retain experienced teachers. 

Artificial intelligence could help. A new tool could improve the one-on-one tutoring sometimes used to supplement class instruction in these schools, by letting tutors tap into more experienced teachers’ expertise during virtual sessions. Here’s how it works.

—Rhiannon Williams 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google is developing an AI agent called Jarvis
It’ll be able to do entire tasks for you, like buying things or making bookings. (The Information $)
+ What are AI agents? (MIT Technology Review)

2 Far-right sheriffs are preparing to disrupt the election 
And the means they’re planning to use are getting more and more violent. (Wired $)
+ Election officials are receiving an unprecedented number of threats. (The Atlantic $)
+ Groups are coordinating online to spread lies about the election. (NBC)

3 Check out the first images of the sun’s flares from a new NASA telescope
These storms are what’s behind the increased visibility of shimmering lights in our night skies recently. (NYT $)

4 Elon Musk seems to have briefly worked illegally in the US
Which makes his current obsession with borders look a tad hypocritical. (WP $)
+ Why is he backing Trump so enthusiastically? (Vox)

5 An AI transcription tool used in hospitals invents things no one said
OpenAI has said its Whisper tool shouldn’t be used in ‘high-risk domains’. But that’s exactly what’s happening. (AP)

6 China is restricting access to materials needed to make chips
It has a near-monopoly, so any squeeze on supply is likely to have an outsized impact. (NYT $)
+ What’s next in chips. (MIT Technology Review)

7 A Neuralink rival says its eye implant restored vision to blind people 
It’s an exciting finding, but it’s still very early days for testing the technology. (Wired $)

8 Nuclear power is back in fashion
But whether building new reactors is the best way to rapidly cut emissions is debatable. (Nature)
+ Why artificial intelligence and clean energy need each other. (MIT Technology Review)

9 Is Boeing fixable? 
It’s been in chaos for the best part of five years, and the problems just keep piling up. (FT $)

10 People have a lot of love for Microsoft Excel 
It’s been around for 40 years, during which time it’s gathered a surprisingly devoted fanbase. (The Guardian)

Quote of the day

“Today’s win may not be parfait, but it’s still pretty sweet.”

—Meredith Rose, senior policy counsel for consumer advocacy group Public Knowledge, hails a US Copyright Office ruling which should make it much easier to fix McDonald’s McFlurry machines, Ars Technica reports.

The big story

Longevity enthusiasts want to create their own independent state. They’re eyeing Rhode Island.

A high-angle drone shot of Lustica bay resort with forested mountains in the background

GETTY IMAGES

May 2023

—Jessica Hamzelou

Earlier this month, I traveled to Montenegro for a gathering of longevity enthusiasts. All the attendees were super friendly, and the sense of optimism was palpable. They’re all confident we’ll be able to find a way to slow or reverse aging—and they have a bold plan to speed up progress.

Around 780 of these people have created a “pop-up city” that hopes to circumvent the traditional process of clinical trials. They want to create an independent state where like-minded innovators can work together in an all-new jurisdiction that gives them free rein to self-experiment with unproven drugs. Welcome to Zuzalu. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I learned a lovely new word recently: sonder.
+ Feeling less brave than you’d like to? This Maya Angelou poem is for you. 
+ You can use miso to boost the flavor of so many more things than you’d imagine. 
+ Such a tender moment captured in this photo of kids buying ice cream.

AI will add to the e-waste problem. Here’s what we can do about it.

Generative AI could account for up to 5 million metric tons of e-waste by 2030, according to a new study.

That’s a relatively small fraction of the current global total of over 60 million metric tons of e-waste each year. However, it’s still a significant part of a growing problem, experts warn. 

E-waste is the term used to describe things like air conditioners, televisions, and personal electronic devices such as cell phones and laptops once they are thrown away. These devices often contain hazardous or toxic materials that can harm human health or the environment if they’re not disposed of properly. Beyond those potential harms, when appliances like washing machines and high-performance computers wind up in the trash, the valuable metals inside them are also wasted, removed from the supply chain instead of being recycled.

Depending on the adoption rate of generative AI, the technology could add 1.2 million to 5 million metric tons of e-waste in total by 2030, according to the study, published today in Nature Computational Science.

“This increase would exacerbate the existing e-waste problem,” says Asaf Tzachor, a researcher at Reichman University in Israel and a co-author of the study, via email.

The study is novel in its attempts to quantify the effects of AI on e-waste, says Kees Baldé, a senior scientific specialist at the United Nations Institute for Training and Research and an author of the latest Global E-Waste Monitor, an annual report.

The primary contributor to e-waste from generative AI is high-performance computing hardware that’s used in data centers and server farms, including servers, GPUs, CPUs, memory modules, and storage devices. That equipment, like other e-waste, contains valuable metals like copper, gold, silver, aluminum, and rare earth elements, as well as hazardous materials such as lead, mercury, and chromium, Tzachor says.

One reason that AI companies generate so much waste is how quickly hardware technology is advancing. Computing devices typically have lifespans of two to five years, and they’re replaced frequently with the most up-to-date versions. 

While the e-waste problem goes far beyond AI, the rapidly growing technology represents an opportunity to take stock of how we deal with e-waste and lay the groundwork to address it. The good news is that there are strategies that can help reduce expected waste.

Extending the lifespan of technologies by using equipment for longer is one of the most significant ways to cut down on e-waste, Tzachor says. Refurbishing and reusing components can also play a significant role, as can designing hardware in ways that make it easier to recycle and upgrade. Implementing these strategies could reduce e-waste generation by up to 86% in a best-case scenario, the study projected.

Only about 22% of e-waste is being formally collected and recycled today, according to the 2024 Global E-Waste Monitor. Much more is collected and recovered through informal systems, including in low- and lower-middle-income countries that don’t have established e-waste management infrastructure in place. Those informal systems can recover valuable metals but often don’t include safe disposal of hazardous materials, Baldé says.

Another major barrier to reducing AI-related e-waste is concerns about data security. Destroying equipment ensures information doesn’t leak out, while reusing or recycling equipment will require using other means to secure data. Ensuring that sensitive information is erased from hardware before recycling is critical, especially for companies handling confidential data, Tzachor says.

More policies will likely be needed to ensure that e-waste, including from AI, is recycled or disposed of properly. Recovering valuable metals (including iron, gold, and silver) can help make the economic case. However, e-waste recycling will likely still come with a price, since it’s costly to safely handle the hazardous materials often found inside the devices, Baldé says. 

“For companies and manufacturers, taking responsibility for the environmental and social impacts of their products is crucial,” Tzachor says. “This way, we can make sure that the technology we rely on doesn’t come at the expense of human and planetary health.”

4 Payment Processing Pitfalls to Avoid

Businesses have ever-increasing ways to accept payments. Options include traditional processors (FIS, Worldpay), payment facilitators (Stripe, Square), payment gateways (Payoneer, 2Checkout), and marketplaces (Etsy, eBay), all offering fast approvals and frictionless onboarding.

Once their merchant accounts are approved and funds are flowing, businesses typically focus on other priorities and think of payment processing only when something breaks. I spoke recently with industry pros who shared advice on preventing those breaks, citing four common pitfalls.

Misclassified Categories

Experts advised merchants not to think of account openings as one-and-done events but rather as fluid agreements with processors that adapt as markets fluctuate and models change.

Mike Eckler, an independent consultant and 20-year payments industry veteran with a leadership background at PayPal, Moneris, and other firms, advised merchants to carefully read contracts, especially clauses that pertain to merchant categories and restricted or forbidden sales.

Mike Eckler

“Your acquirer and other payment service providers will ask you to classify your company by assigning a merchant category code,” he said, explaining that card brands Visa and Mastercard assign these codes based on a business’s products and services. “If your acquirer or the card brands discover that you have misclassified your business, it could lead to penalties and possible termination.”

David True, founding member of PayGility Advisors, a fintech and payments consultancy, and president of industry trade association NYPAY, whose 30-year career includes senior roles at American Express, Mastercard, and other payments organizations, advised merchants to consider card brand requirements when applying for processing services.

“From a merchant’s perspective, the first consideration is avoiding scrutiny by adhering to card brand rules,” he said. “A processor or acquirer doesn’t have the final say on a merchant’s degree of risk or eligibility; these decisions are based on card brands.”

Unaligned Risk Appetites

True further noted that some agreements extend beyond card brands and processors to payment gateways, independent sales organizations (ISOs), and third-party vendors. “There are all kinds of relationships in the business,” he said. “If you’re an ISO, you must ensure that your acquiring bank will support a merchant category before you board accounts. If you’re a bank, you need the risk tolerance and back-office controls to support that category. If you’re a merchant, you need to align with service provider expectations and risk appetites.”

David True

True recalled an ISO pitching a bank on a new merchant category, claiming the rewards would outweigh the risks. The bank agreed, he said, due to its longstanding relationship and trust in the ISO’s due diligence, customer verification, and underwriting processes.

Eckler agreed that relationships matter in payment processing but pointed out that some categories are relatively higher risk and more likely to be shut down by processors, card brands, or acquirers. These categories include gambling, dating and adult content, health products and supplements, credit repair services, and illegal or potentially illegal sites that traffic in weapons and counterfeit goods.

Hence merchants should avoid activities that could potentially damage card brand reputations, Eckler added, stating, “Card brands protect their reputation carefully and will punish or ban merchants that tarnish it.”

Excessive Chargebacks

Proactively monitor customer inquiries, disputes, and refunds, experts advised, to keep chargeback ratios below the standard industry rate of 1% — one chargeback for every 100 transactions. Eckler suggested merchants consider providers that screen and score transactions before acceptance.

“Many services are provided as a value-add while others charge a fee,” he said, advising merchants to weigh additional expense against the cost of handling chargebacks. “By the time you factor in chargeback fees, potentially lost goods, and time and effort spent investigating and fighting chargebacks, it may be worthwhile to pay a small fee to screen for fraudulent transactions.”

Eckler stated that larger merchants may consider other services such as rapid dispute resolution, Visa’s Order Insight, and others, adding that contrary to popular belief, chargebacks are not always bad. “A small number of chargebacks for a high-volume merchant usually means that the merchant is taking a reasonable amount of risk to win business from new markets.”

True suggested reviewing chargeback reason codes for clues about customer trends and behavior patterns. Visa recently rolled out a program that assesses customer buying patterns and identifies out-of-pattern behaviors, he said. Merchants can leverage this capability.

“Think about what triggers your chargebacks and whether the causes are service- or product-driven,” he said. “If you’re new to the business, research chargeback issues that others have in your space. Most importantly, post clear return policies on your website and invite an independent and objective source to review these policies and terms and conditions to confirm they are clear and understandable.”
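The 1% threshold the experts cite is simple arithmetic: divide chargebacks by total transactions. A minimal sketch (the merchant figures below are invented for illustration):

```python
# Hypothetical illustration of the 1% chargeback threshold discussed above:
# one chargeback for every 100 transactions. The numbers are invented.

def chargeback_ratio(chargebacks: int, transactions: int) -> float:
    """Return chargebacks as a fraction of total transactions."""
    if transactions <= 0:
        raise ValueError("transaction count must be positive")
    return chargebacks / transactions

# A merchant with 12,500 monthly transactions and 150 chargebacks
# sits at 1.2%, above the standard 1% industry threshold.
ratio = chargeback_ratio(150, 12_500)
print(f"{ratio:.2%}")   # 1.20%
print(ratio <= 0.01)    # False -- above the 1% threshold
```

In practice, card brands compute these ratios per month and per acquirer, so a merchant's own tracking should mirror whatever window its processor uses.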

Subpar Security

Fraud is ever-present in ecommerce, but experts noted that PCI DSS compliance and tech-driven tools can protect companies, customers, and infrastructures from known and emerging threats. Eckler sees fraud as a cost of doing business and advised owners to train employees to recognize phishing [fraudulent communications] and social engineering [impersonating trusted parties to obtain information], either of which could lead to a ransomware attack.

True stated that fraud never sleeps, so merchants need always-on, always-connected fraud prevention solutions. “Studies have shown that first-party fraud [customers deliberately providing false info] and friendly fraud [dishonestly disputing a purchase] account for 60 to 70% of all chargebacks,” he said. “Shop for a vendor with next-gen technologies to continuously monitor, detect, and remediate fraud.”

True acknowledged that most ecommerce businesses don’t want to encumber customers with added security features at checkout but urged merchants to weigh the risk of a few lost sales from those features against the costs of a security breach.

Ecommerce websites must accurately reflect their brands and offerings, True stated, and businesses must advise processors of any plans to change a website, product category, or campaigns that could drive up transaction volumes.

“If you’re planning to change your business, advise your acquirer so they can pass it upstream and provide an explanation. Don’t rely on acquirers’ salespeople to pass this message, because they may say ‘that’s great’ without seeing the potential red flag.”

Teamsters Target Amazon Delivery Drivers

They drive large trucks emblazoned with “Amazon” and wear Amazon uniforms. But are they Amazon employees?

That’s the question central to a dispute over the International Brotherhood of Teamsters’ attempts to organize the drivers who deliver Amazon packages in California, Illinois, and New York.

Last year, 84 Amazon delivery drivers from Palmdale, California, became the first U.S. group to join the Teamsters. Since then, drivers in Skokie, Illinois, the New York City borough of Queens, and Victorville, California, have done the same.

But here’s the thing: Amazon doesn’t sign the drivers’ paychecks. They work for Amazon’s massive Delivery Service Partner (DSP) network. The DSPs are independent companies started under an Amazon program that allows aspiring business owners to set up shop for as little as $10,000. The DSPs employ drivers who operate the DSPs’ trucks, which are generally leased through third-party companies Amazon approves.

Because of that, Amazon says DSP drivers are not its employees. The union alleges that Amazon is a co-employer of the drivers and accuses the ecommerce behemoth of using its DSP program to dodge responsibility. So far, regional National Labor Relations Board officials have sided with the Teamsters. NLRB’s Region 31 issued a complaint against Amazon dated September 30.

However, the situation is far from settled, as the issue remains pending through the NLRB process and the courts. An NLRB administrative law judge set a hearing on the regional complaint for March 25, 2025.

Most Amazon delivery drivers work for independent companies within its Delivery Service Partner network.

Bad for Amazon Sellers?

Should the union be victorious, Amazon would almost certainly pass along increased costs by raising fees it charges platform sellers, says Phil Masiello, CEO of CrunchGrowth Revenue Acceleration Agency and a longtime Amazon seller and founder of multiple ecommerce companies. But, he added, Amazon won’t easily give in.

“The long and short of it is you can look at what happened in New York and other areas. Amazon will absolutely fight it,” Masiello says.

Teamsters Position

A Teamsters spokesperson did not respond to requests for comment. However, in news releases, the union accuses Amazon of using the DSP structure to evade its obligations while “exercising total control over the wages, workplace conditions, and safety standards of the drivers.”

In a release issued on October 2, Sean M. O’Brien, Teamsters general president, said the NLRB complaint “brings us one step closer to getting Amazon workers the pay, working conditions, and contracts they deserve. Amazon has no choice but to meet us at the negotiating table.”

The union and its allies have applied political pressure on Amazon.

On October 18, 133 U.S. House of Representatives members, led by the Congressional Labor Caucus, issued a letter asking Amazon CEO Andy Jassy to provide information about “unlawful violations of the National Labor Relations Act.”

“We are deeply troubled by ongoing reports that Amazon may be unlawfully coercing, intimidating, and retaliating against workers involved in union organizing activity,” the letter said.

The letter then asks Jassy to respond to six questions related to union organizing efforts by direct Amazon workers and those employed by DSP operators.

What Amazon Says

In an email to Practical Ecommerce, Amazon spokesperson Eileen Hards said the NLRB complaint “makes clear that the Teamsters have been misrepresenting the facts here for over 15 months, which is why the NLRB has not included most of their larger allegations.”

“As we’ve said all along, there is no merit to any of their claims. We look forward to showing that, as the legal process continues, and expect the few remaining allegations will be dismissed as well,” Hards said, adding that Amazon is not inherently opposed to unionization.

“Our employees have the choice of whether or not to join a union. They always have,” she said in the email. “We favor opportunities for each person to be respected and valued as an individual and to have their unique voice heard by working directly with our team. The fact is, Amazon already offers what many unions are requesting: competitive pay, health benefits on day one, and opportunities for career growth. We look forward to working directly with our team to continue making Amazon a great place to work.”

Amazon says its DSP network consists of 4,400 business owners who employ 390,000 drivers and generate a combined $58 billion in revenue — roughly the same as Delta Air Lines’ fiscal 2023 operating revenue.

Amazon’s turbulent relationship with the Teamsters goes beyond organizing efforts by DSP drivers. In 2022, Amazon workers at the JFK8 Fulfillment Center on Staten Island, New York, formed the Amazon Labor Union (ALU). The ALU affiliated with the Teamsters earlier this year, becoming the ALU-IBT.

Founded in 1903, the Teamsters represent 1.3 million people in the U.S., Canada, and Puerto Rico.

Google Proposes New Shipping Structured Data via @sejournal, @martinibuster

Google published a proposal in the Schema.org Project GitHub repository to update Schema.org’s shopping structured data so that merchants can provide more shipping information, which will likely surface in Google Search and other systems.

Shipping Schema.org Structured Data

The proposed new structured data type can be used by merchants to provide more shipping details. It also adds the flexibility of defining sitewide shipping structured data that can be nested within the Organization structured data, avoiding the need to repeat the same information thousands of times across a website.

The initial proposal states:

“This is a proposal from Google to support a richer representation of shipping details (such as delivery cost and speed) and make this kind of data explicit. If adopted by schema.org and publishers, we consider it likely that search experiences and other consuming systems could be improved by making use of such markup.

This change introduces a new type, ShippingService, that groups shipping constraints (delivery locations, time, weight and size limits and shipping rate). Redundant fields from ShippingRateSettings are therefore been deprecated in this proposal.

As a consequence, the following changes are also proposed:

some fields in OfferShippingDetails have moved to ShippingService;
ShippingRateSettings has more ways to specify the shipping rate, proportional to the order price or shipping weight;
linking from the Offer should now be done with standard Semantic Web URI linking.”
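To make the proposed nesting concrete, here is a rough sketch of what sitewide shipping markup under Organization might look like, built as JSON-LD in Python. The ShippingService type comes from the proposal, but the exact property names and shape (including the `hasShippingService` linking property used here) are illustrative assumptions and may change as the discussion progresses:

```python
import json

# Illustrative only: a guess at how a sitewide ShippingService might be
# nested under Organization per the proposal. "hasShippingService" is a
# hypothetical linking property; the final spelling is still under
# discussion on GitHub.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",
    "hasShippingService": {          # hypothetical property name
        "@type": "ShippingService",  # new type proposed by Google
        "shippingDestination": {
            "@type": "DefinedRegion",
            "addressCountry": "US",
        },
        "shippingRate": {
            "@type": "ShippingRateSettings",
            "shippingRate": {
                "@type": "MonetaryAmount",
                "value": 4.99,
                "currency": "USD",
            },
        },
    },
}

print(json.dumps(organization, indent=2))
```

The point of the design is that this block would live once at the Organization level, with individual Product or ProductGroup markup linking to it rather than repeating it.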

The proposal is open for discussion and many stakeholders are offering opinions on how the updated and new structured data would work.

For example, one participant in the discussion asked how a sitewide structured data type placed at the Organization level could be superseded when individual products had different information, and someone else provided an answer.

A participant in the GitHub discussion named Tiggerito posted:

“I re-read the document and what you said makes sense. The Organization is a place where shared ShippingConditions can be stored. But the ShippingDetails is always at the ProductGroup or Product level.

This is how I currently deal with Shipping Details:

In the back end the owner can define a global set of shipping details. Each contains the fields Google currently support, like location and times, but not specifics about dimensions. Each entry also has conditions for what product the entry can apply to. This can include a price range and a weight range.

When I’m generating the structured data for a page I include the entries where the product matches the conditions.

This change looks like it will let me change from filtering out the conditions on the server, to including them in the Structured Data on the product page.

Then the consumers of the data can calculate which ShippingConditions are a match and therefore what rates are available when ordering a specific number of the product. Currently, you can only provide prices for shipping one.

The split also means it’s easier to provide product specific information as well as shared shipping information without the need for repetition.

Your example in the document at the end for using Organization. It looks like you are referencing ShippingConditions for a product that are on a shipping page. This cross-referencing between pages could greatly reduce the bloat this has on the product page, if supported by Google.”
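The server-side filtering Tiggerito describes — shipping entries with price and weight range conditions, matched against each product before emitting structured data — can be sketched roughly as follows. All names and figures here are invented for illustration:

```python
from dataclasses import dataclass

# A sketch of the matching logic described above: each shipping entry
# carries optional price and weight ranges, and only entries whose ranges
# contain the product's values are emitted into the page's structured data.
# All names and numbers are invented for illustration.

@dataclass
class ShippingEntry:
    name: str
    min_price: float = 0.0
    max_price: float = float("inf")
    min_weight: float = 0.0
    max_weight: float = float("inf")

def matching_entries(entries, price: float, weight: float):
    """Return the shipping entries whose conditions the product satisfies."""
    return [
        e for e in entries
        if e.min_price <= price <= e.max_price
        and e.min_weight <= weight <= e.max_weight
    ]

entries = [
    ShippingEntry("standard", max_weight=20.0),
    ShippingEntry("free-over-50", min_price=50.0, max_weight=20.0),
    ShippingEntry("freight", min_weight=20.0),
]

# A $60, 2 kg product matches both "standard" and "free-over-50".
print([e.name for e in matching_entries(entries, price=60.0, weight=2.0)])
```

Under the proposal, this filtering could move from the server to the consumer: the page would publish the conditions themselves, and systems reading the markup would select the applicable entry.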

The Googler responded to Tiggerito:

“@Tiggerito

The Organization is a place where shared ShippingConditions can be stored. But the ShippingDetails is always at the ProductGroup or Product level.

Indeed, and this is already the case. This change also separates the two meanings of eg. width, height, weight as description of the product (in ShippingDetails) and as constraints in the ShippingConditions where they can be expressed as a range (QuantitativeValue has min and max).

In the back end the owner can define a global set of shipping details. Each contains the fields Google currently support, like location and times, but not specifics about dimensions. Each entry also has conditions for what product the entry can apply to. This can include a price range and a weight range.

When I’m generating the structured data for a page I include the entries where the product matches the conditions.

This change looks like it will let me change from filtering out the conditions on the server, to including them in the Structured Data on the product page.

Then the consumers of the data can calculate which ShippingConditions are a match and therefore what rates are available when ordering a specific number of the product. Currently, you can only provide prices for shipping one.

Some shipping constraints are not available at the time the product is listed or even rendered on a page (eg. shipping destination, number of items, wanted delivery speed or customer tier if the user is not logged in). The ShippingDetails attached to a product should contain information about the product itself only, the rest gets moved to the new ShippingConditions in this proposal.
Note that schema.org does not specify a cardinality, so that we could specify multiple ShippingConditions links so that the appropriate one gets selected at the consumer side.

The split also means it’s easier to provide product specific information as well as shared shipping information without the need for repetition.

Your example in the document at the end for using Organization. It looks like you are referencing ShippingConditions for a product that are on a shipping page. This cross-referencing between pages could greatly reduce the bloat this has on the product page, if supported by Google.

Indeed. This is where we are trying to get at.”

Discussion On LinkedIn

LinkedIn member Irina Tuduce (LinkedIn profile), a software engineer at Google Shopping, initiated a discussion that received multiple responses demonstrating interest in the proposal.

Andrea Volpini (LinkedIn profile), CEO and Co-founder of WordLift, expressed his enthusiasm for the proposal in his response:

“Like this Irina Tuduce it would streamline the modeling of delivery speed, locations, and cost for large organizations”

Another member, Ilana Davis (LinkedIn profile), developer of the JSON-LD for SEO Shopify App, posted:

“I already gave my feedback on the naming conventions to schema.org which they implemented. My concern for Google is how exactly merchants will get this data into the markup. It’s nearly impossible to get exact shipping rates in the SD if they fluctuate. Merchants can enter a flat rate that is approximate, but they often wonder if that’s acceptable. Are there consequences to them if the shipping rates are an approximation (e.g. a price mismatch in GMC disapproves a product)?”

Inside Look At Development Of New Structured Data

The ongoing LinkedIn discussion offers a peek at how stakeholders feel about the new structured data. The official Schema.org GitHub discussion not only shows how the proposal is progressing but also gives stakeholders an opportunity to provide feedback that shapes what it will ultimately look like.

There is also a public Google Doc titled, Shipping Details Schema Change Proposal, that has a full description of the proposal.

Featured Image by Shutterstock/Stokkete

Google Expands AI Overviews In Search To Over 100 Countries via @sejournal, @MattGSouthern

Google expands AI-powered search summaries globally, now reaching over 100 countries with support for six different languages.

  • Google’s AI Overviews is expanding from US-only to over 100 countries, reaching 1 billion monthly users.
  • The feature now supports six languages and includes new ways to display website links.
  • Google has started showing ads in AI Overviews for US mobile users.