OpenAI’s latest product lets you vibe code science

OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers.

The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science.

Kevin Weil, head of OpenAI for Science, pushes that analogy himself. “I think 2026 will be for AI and science what 2025 was for AI in software engineering,” he said at a press briefing yesterday. “We’re starting to see that same kind of inflection.”

OpenAI claims that around 1.3 million scientists around the world submit more than 8 million queries a week to ChatGPT on advanced topics in science and math. “That tells us that AI is moving from curiosity to core workflow for scientists,” Weil said.

Prism is a response to that user behavior. It can also be seen as a bid to lock in more scientists to OpenAI’s products in a marketplace full of rival chatbots.

“I mostly use GPT-5 for writing code,” says Roland Dunbrack, a professor of biology at the Fox Chase Cancer Center in Philadelphia, who is not connected to OpenAI. “Occasionally, I ask LLMs a scientific question, basically hoping it can find information in the literature faster than I can. It used to hallucinate references but does not seem to do that very much anymore.”

Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says GPT-5 has already become an important tool in his work. “It sometimes helps polish the text of papers, catching mathematical typos or bugs, and provides generally useful feedback,” he says. “It is extremely helpful for quick summarization of research articles, making interaction with the scientific literature smoother.”

By combining a chatbot with an everyday piece of software, Prism follows a trend set by products such as OpenAI’s Atlas, which embeds ChatGPT in a web browser, as well as LLM-powered office tools from firms such as Microsoft and Google DeepMind.

Prism incorporates GPT-5.2, the company’s best model yet for mathematical and scientific problem-solving, into an editor for writing documents in LaTeX, a markup language that scientists commonly use to format papers.

A ChatGPT chat box sits at the bottom of the screen, below a view of the article being written. Scientists can call on ChatGPT for anything they want. It can help them draft the text, summarize related articles, manage their citations, turn photos of whiteboard scribbles into equations or diagrams, or talk through hypotheses or mathematical proofs.

It’s clear that Prism could be a huge time saver. It’s also clear that a lot of people may be disappointed, especially after weeks of high-profile social media chatter from researchers at the firm about how good GPT-5 is at solving math problems. Science is drowning in AI slop: Won’t this just make it worse? Where is OpenAI’s fully automated AI scientist? And when will GPT-5 make a stunning new discovery?

That’s not the mission, says Weil. He would love to see GPT-5 make a discovery. But he doesn’t think that’s what will have the biggest impact on science, at least not in the near term.

“I think more powerfully—and with 100% probability—there’s going to be 10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly, and AI will have been a contributor to that,” Weil told MIT Technology Review in an exclusive interview this week. “It won’t be this shining beacon—it will just be an incremental, compounding acceleration.”

The first human test of a rejuvenation method will begin “shortly” 

When Elon Musk was at Davos last week, an interviewer asked him if he thought aging could be reversed. Musk said he hasn’t put much time into the problem but suspects it is “very solvable” and that when scientists discover why we age, it’s going to be something “obvious.”

Not long after, the Harvard professor and life-extension evangelist David Sinclair jumped into the conversation on X to strongly agree with the world’s richest man. “Aging has a relatively simple explanation and is apparently reversible,” wrote Sinclair. “Clinical Trials begin shortly.”

“ER-100?” Musk asked.

“Yes,” replied Sinclair.

ER-100 turns out to be the code name of a treatment created by Life Biosciences, a small Boston startup that Sinclair cofounded and which he confirmed today has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers. 

The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech. 

The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off.  

“Reprogramming is like the AI of the bio world. It’s the thing everyone is funding,” says Karl Pfleger, an investor who backs a smaller UK startup, Shift Bioscience. He says Sinclair’s company has recently been seeking additional funds to keep advancing its treatment.

Reprogramming is so powerful that it sometimes creates risks, even causing cancer in lab animals, but the version of the technique being advanced by Life Biosciences passed initial safety tests in animals.

But it’s still very complex. The trial will initially test the treatment on about a dozen patients with glaucoma, a condition where high pressure inside the eye damages the optic nerve. In the tests, viruses carrying three powerful reprogramming genes will be injected into one eye of each patient, according to a description of the study first posted in December. 

To help make sure the process doesn’t go too far, the reprogramming genes will be under the control of a special genetic switch that turns them on only while the patients take a low dose of the antibiotic doxycycline. Initially, they will take the antibiotic for about two months while the effects are monitored. 

Executives at the company have said for months that a trial could begin this year, sometimes characterizing it as a starting bell for a new era of age reversal. “It’s an incredibly big deal for us as an industry,” Michael Ringel, chief operating officer at Life Biosciences, said at an event this fall. “It’ll be the first time in human history, in the millennia of human history, of looking for something that rejuvenates … So watch this space.”

The technology is based on the Nobel Prize–winning discovery, 20 years ago, that introducing a few potent genes into a cell will cause it to turn back into a stem cell, just like the ones in an early embryo that go on to form the body’s specialized cell types. These genes, known as Yamanaka factors, have been likened to a “factory reset” button for cells.

But they’re dangerous, too. When turned on in a living animal, they can cause an eruption of tumors.

That is what led scientists to a new idea, termed “partial” or “transient” reprogramming. The idea is to limit exposure to the potent genes—or use only a subset of them—in the hope of making cells act younger without giving them complete amnesia about what their role in the body is.

In 2020, Sinclair claimed that such partial reprogramming could restore vision to mice after their optic nerves were smashed, saying there was even evidence that the nerves regrew. His report appeared on the cover of the influential journal Nature alongside the headline “Turning Back Time.”

Not all scientists agree that reprogramming really counts as age reversal. But Sinclair has doubled down. He’s been advancing the theory that the gradual loss of correct epigenetic information in our cells is, in fact, the ultimate cause of aging—just the kind of root cause that Musk was alluding to.

“Elon does seem to be paying attention to the field and [is] seemingly in sync with [my theory],” Sinclair said in an email.

Reprogramming isn’t the first longevity fix championed by Sinclair, who’s written best-selling books and commands stratospheric fees on the longevity lecture circuit. Previously, he touted the longevity benefits of molecules called sirtuins as well as resveratrol, a molecule found in red wine. But some critics say he greatly exaggerates scientific progress, pushback that culminated in a 2024 Wall Street Journal story that dubbed him a “reverse-aging guru” whose companies “have not panned out.” 

Life Biosciences has been among those struggling companies. Initially formed in 2017, it at first had a strategy of launching subsidiaries, each intended to pursue one aspect of the aging problem. But after these made limited progress, in 2021 it hired a new CEO, Jerry McLaughlin, who has refocused its efforts on Sinclair’s mouse vision results and the push toward a human trial.

The company has discussed the possibility of reprogramming other organs, including the brain. And Ringel, like Sinclair, entertains the idea that someday even whole-body rejuvenation might be feasible. But for now, it’s better to think of the study as a proof of concept that’s still far from a fountain of youth. “The optimistic case is this solves some blindness for certain people and catalyzes work in other indications,” says Pfleger, the investor. “It’s not like your doctor will be writing a prescription for a pill that will rejuvenate you.”

Life’s treatment also relies on an antibiotic switching mechanism that, while often used in lab animals, hasn’t been tried in humans before. Since the switch is built from gene components taken from E. coli and the herpes virus, it’s possible that it could cause an immune reaction in humans, scientists say. 

“I was always thinking that for widespread use you might need a different system,” says Noah Davidsohn, who helped Sinclair implement the technique and is now chief scientist at a different company, Rejuvenate Bio. And Life’s choice of reprogramming factors—it’s picked three, which go by the acronym OSK—may also be risky. They are expected to turn on hundreds of other genes, and in some circumstances the combination can cause cells to revert to a very primitive, stem-cell-like state.

Other companies studying reprogramming say their focus is on researching which genes to use, in order to achieve age reversal without unwanted side effects. New Limit, which has been carrying out an extensive search for such genes, says it won’t be ready for a human study for two years. At Shift, experiments on animals are only beginning now.

“Are their factors the best version of rejuvenation? We don’t think they are. I think they are working with what they’ve got,” Daniel Ives, the CEO of Shift, says of Life Biosciences. “But I think they’re way ahead of anybody else in terms of getting into humans. They have found a route forward in the eye, which is a nice self-contained system. If it goes wrong, you’ve still got one left.”

How Ecommerce Succeeds in Africa

Most global ecommerce businesses outsource customer deliveries. The process depends on standardized addresses, reliable couriers, predictable delivery windows, and successful online checkout.

Yet many African markets lack these pillars. This disconnect is apparent in the first fulfillment step.

Logistics in Africa

Informal, landmark addresses

Automated routing software is ineffective when a driver relies on directions like “turn left at the blue gate after the mango tree.” A driver who makes 100 drops in New York may only complete 20 in Lagos or Nairobi because of the need for multiple phone calls to locate the customer.

This inefficiency inflates the cost per delivery by making it unfeasible to ship low-value products (such as a $5 t-shirt) without charging a delivery fee that equals or exceeds the item’s value.

Consumer skepticism

Delivery mistakes and failures are routine, eroding consumer trust. The problem is illustrated by the “What I ordered vs. what I got” trend, a viral meme originating in Nigeria, where consumers share photos of inferior goods.

The result is that many shoppers in Africa refuse to prepay. They demand cash on delivery and insist on inspecting the package at the doorstep before paying.

If they reject an item (due to poor quality or simple preference), merchants must pay for the return trip, doubling the logistics cost for zero revenue.

In Nigeria, consumers share “what I ordered vs. what I got” photos. This example is from TikTok.

Infrastructure gaps

Adding drivers or warehouses does not automatically reduce unit costs. Poor roads, limited city-to-city transport, and port congestion persist. The asset-heavy approach of owning trucks and distribution centers often becomes financially unsustainable.

Third-party couriers inherit these flaws

Merchants hoping to outsource these bottlenecks find that third-party logistics providers hit the same reality. The market limits a driver’s efficiency. Even if a courier has a flawless local network, delays in cargo clearance or urban gridlock often cascade downstream.

Local solutions

Local players are rewriting the rules by investing in systems that function effectively regardless of the environment. These include:

Human agent networks, which decentralize and delegate the “last mile” to locals. The local agent knows the neighborhood (solving the address problem), and the customer knows the agent (removing mistrust).

Jumia, Africa’s dominant marketplace, recently pivoted to this model with its JForce program, which has recruited over 30,000 local agents in rural areas and smaller cities.

Informal fleets. Another emerging solution is building software layers that coordinate the millions of motorcycles and tuk-tuks (three-wheeled vehicles) on the road. This avoids the costs of fleet ownership while using vehicles better suited for navigating traffic.

In Lagos, for example, Kwik, an on-demand courier, deploys independent motorbike riders who can weave through traffic and gridlock that would trap a delivery van.

Similarly, Loop in South Africa develops software that dynamically adjusts routes for third-party fleets based on real-time traffic.

Kwik deploys motorbike riders in Lagos, Nigeria, who can weave through traffic and gridlock. Photo: Kwik.

Deliver in bulk to intermediaries. Delivering bulk goods to known, informal retailers rather than individuals allows couriers to drop 50 items at one location (a shop) rather than making 50 trips to customers’ houses.

Anticipate failures. Implementing “pre-failure” checks and contingency tools for drivers can prevent minor friction points from escalating to failed deliveries.

For example:

  • “Cash floats” protect cash-on-delivery revenue. Delivery provider Glovo mandates that drivers carry pre-counted small bills, preventing failed deliveries caused by an inability to make change.
  • Verify first. Loop uses automated WhatsApp flows to contact the customer before the driver leaves the hub. If the customer does not confirm availability, the system flags the order to prevent a wasted trip.

The new playbook

Consumers in Africa are concentrated and accessible. The Big Four markets of Nigeria, Egypt, South Africa, and Kenya command nearly 70% of startup capital.

Yet capital alone cannot fix the “trust deficit” or pave the roads. Ecommerce winners in Africa adapt to hyperlocal challenges to sell profitably.

Google AI Overviews Now Powered By Gemini 3 via @sejournal, @MattGSouthern

Google is making Gemini 3 the default model for AI Overviews in markets where the feature is available and adding a direct path into AI Mode conversations.

The updates, shared in a Google blog post, bring Gemini 3’s reasoning capabilities to AI Overviews. Google says the feature now reaches over one billion users.

What’s New

Gemini 3 For AI Overviews

The Gemini 3 upgrade brings the same reasoning capabilities to AI Overviews that previously powered AI Mode.

Robby Stein, VP of Product for Google Search, wrote:

“We’re rolling out Gemini 3 as the default model for AI Overviews globally, so even more people will be able to access best-in-class AI responses, directly in the results page for questions where it’s helpful.”

Gemini 3 launched in November, and Google shipped it to AI Mode on release day. This expands Gemini 3 from AI Mode into AI Overviews as the default.

AI Overview To AI Mode Transition

You can now ask a follow-up question right from an AI Overview and continue into AI Mode. The context from the original response carries into the conversation, so you don’t start over.

Stein described the thinking behind the change:

“People come to Search for an incredibly wide range of questions – sometimes to find information quickly, like a sports score or the weather, where a simple result is all you need. But for complex questions or tasks where you need to explore a topic deeply, you should be able to seamlessly tap into a powerful conversational AI experience.”

He called the result “one fluid experience with prominent links to continue exploring.”

An earlier test of this flow ran globally on mobile back in December.

In testing, Google found people prefer this kind of natural flow into conversation. The company also found that keeping AI Overview context in follow-ups makes Search more helpful.

Why This Matters

The pattern has held since AI Overviews launched. Each update makes it easier to stay within AI-powered responses.

When Gemini 3 arrived in AI Mode, it brought deeper query fan-out and dynamic response layouts. AI Overviews running on the same model could produce different citation patterns.

That makes today’s update an important one to monitor. Model changes can affect which pages get cited and how responses are structured.

Looking Ahead

Google says the updates are rolling out starting today, though availability may vary by market.

Google previously indicated plans to add automatic model selection that routes complex questions to Gemini 3 while using faster models for simpler tasks. Whether that affects AI Overviews beyond today’s default model change isn’t specified.



How Do You Compete In Agentic Commerce? via @sejournal, @Kevin_Indig


Agentic commerce transforms organic search from a source of cheap traffic into the mandatory gatekeeper of AI verification. Marketing arbitrage dies; product truth wins.


This week, we’re covering:

  • Why agentic commerce filters out marketing-first brands and rewards granular product data.
  • How ChatGPT, Copilot, and Google’s protocols reshape merchant economics and customer relationships.
  • Which feeds to optimize, which protocols to prioritize, and the implementation sequence that matters.

Agentic commerce acts as a “great filter,” so to speak, for marketing arbitrage, transforming organic search from a source of cheap traffic into the mandatory gatekeeper of AI verification.

The signal is already visible in the noise. During the 2025 holiday season, AI agents powered 20% of retail sales. Even allowing for loose definitions, the era of agentic commerce has arrived.

All major LLMs now offer direct checkout and new commerce protocols:

  1. ChatGPT has Instant Checkout with Shopify and Etsy, and ACP (Agentic Commerce Protocol).
  2. Microsoft Copilot uses ACP and offers Copilot Checkout with PayPal, Shopify, and Stripe.
  3. Google has embedded checkout in AI Mode and Gemini via its Universal Commerce Protocol (UCP).

The infrastructure question is settled, but the strategic question remains: How do you compete when users don’t need to click through to websites to buy?

1. Agentic Commerce Has A Hole In The Middle

The phrasing “agentic commerce” sets the wrong expectation. Autonomous purchasing, where you give an agent a credit card and monthly allowance to buy on your behalf, is not becoming a reality in the near future.

  • High-priced purchases like plane tickets or cars are too risky to delegate. You have idiosyncratic preferences (airline seat rules, car features) that no agent can reliably model.
  • Low-priced purchases like toilet paper or laundry detergent already have automation via subscription services (Instacart recurring orders, Subscribe & Save). An agent adds no incremental value.
  • The middle ground is smaller than the hype suggests. If high-priced resists delegation and low-priced is already “automated,” where does autonomous purchasing actually generate value?

“Conversational commerce” is a better frame. Instead of 100% automating the act of buying, LLMs compress the funnel by offering far superior research to classic search engines and showing products in the user interface.

  • Models read expert reviews, product specs, ingredient lists, and actual user feedback rather than ranking by keyword bids and conversion history.
  • The value lies in collapsing 14 clicks (Amazon’s disclosed average before purchase) into one or two.

2. Protocols Make Ecommerce “Headless”

The new commerce protocols let AI agents plug directly into the backend of your business, instead of crawling your site and displaying pages in a list of search results. Protocols make commerce “headless,” decoupling the front end from the back end:

  • Websites become less important as destinations and more important as databases.
  • The game shifts from optimizing landing page design for human eyes to optimizing data feeds for machine ingestion.
  • If your shipping speed, inventory status, or return policy isn’t accessible via API, you are invisible to the agent.

The shift from crawling to protocols collapses the legacy 14-click funnel (search, browse, click, checkout) into just two interactions: (1) the model parses intent by matching expert reviews against real-time inventory, and (2) the user executes a single click to buy using stored credentials.
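To make the “databases, not destinations” point concrete, here is a hypothetical sketch of the kind of machine-readable record an agent might query from a merchant backend. The field names and the eligibility check are illustrative assumptions, not the actual ACP or UCP schema.

```python
import json

# Hypothetical merchant record exposed to an agent via API.
# Field names are illustrative, not from the real ACP/UCP specs.
product_record = {
    "product_id": "SKU-4821",
    "title": "Trail Running Shoe",
    "price": {"amount": 89.99, "currency": "USD"},
    "inventory_status": "in_stock",
    "shipping": {"speed_days": 2, "fee": 0.0},
    "return_policy": {"window_days": 30, "free_returns": True},
}

def agent_can_recommend(record: dict) -> bool:
    """An agent filters out any merchant whose feed lacks the
    fields it needs to verify the offer before recommending it."""
    required = {"price", "inventory_status", "shipping", "return_policy"}
    return required.issubset(record) and record["inventory_status"] == "in_stock"

print(agent_can_recommend(product_record))  # True
print(json.dumps(product_record["shipping"]))
```

The point of the sketch: if `shipping` or `return_policy` were missing from the feed, the check fails and the product never reaches the recommendation slot, regardless of how good the landing page looks.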


While both protocols, ACP and UCP, enable the same user experience, they offer vastly different terms for the merchant.

OpenAI’s ACP (Agentic Commerce Protocol)

  • The Vision: The “Walled Garden.” OpenAI aims to handle the entire transaction within the chat interface, treating merchants effectively as suppliers.
  • The Trade-off: Efficiency vs. LTV. You gain access to 700 million weekly users, but you lose the direct customer relationship. Because OpenAI currently restricts passing customer emails for marketing, you lose the ability to remarket – effectively killing the 15-20% of Lifetime Value (LTV) that typically comes from post-purchase email flows.

Google’s UCP (Universal Commerce Protocol)

  • The Vision: The “Distributed Layer.” Google extends its Shopping Graph into a transactional layer that sits on top of Search, Lens, and Gemini.
  • The Trade-off: Ownership vs. Competition. Unlike ACP, Google allows merchants to retain the full customer lifecycle, including email rights and loyalty data. The cost is significantly higher competition intensity: Instead of fighting for 10 blue links, you are fighting for one of three “slots” in an AI Overview, making the margin for error in your product data effectively zero.

3. Conversational Commerce Disrupts The Whole Ecosystem

The shift from search to conversation creates a distinct set of winners, losers, and strategic dilemmas.

Buyers get a dramatically better user experience.

  • Discovery: High-consideration purchases (e.g., specific running shoes) shift from clicking through six potentially irrelevant product listing ads to receiving top-tier recommendations based on expert reviews.
  • Cognitive Load: The model handles the research, collapsing the average 14-click journey into one to two interactions.

Merchants face a tradeoff between distribution and control.

  • On ChatGPT: You gain access to early adopters, but lose the direct customer relationship and email marketing rights. You have no leverage over commission rates or recommendation logic.
  • On Google/Copilot: You retain merchant-of-record status, but as the funnel compresses, on-site ad inventory loses value. While conversion rates may rise, total ad revenue falls.

Affiliates die when LLMs disintermediate the click.

  • The Trap: If ChatGPT synthesizes reviews without sending traffic, affiliates stop writing. This creates an “ouroboros” where models train on their own AI-generated output.
  • The Pivot: Publishers must paywall premium content or charge merchants directly for reviews.

Amazon dominates on price and speed, but faces a business model conflict.

  • The Conflict: Retail margins are thin (~1%); profitability comes from the $60 billion advertising business.
  • The Risk: Amazon’s ad machine relies on a 14-click funnel. If conversational commerce compresses this to one click, sponsored product inventory evaporates.
  • The Choice: They must either block crawlers to protect ad revenue (current strategy) or participate and cannibalize it. Walmart joining ChatGPT forces their hand.

Google is best positioned to weather the shift.

  • Parity: They are already monetizing AI Overviews at parity with legacy search.
  • Economics: Higher relevance leads to exploding conversion rates. Advertisers will pay more per click to offset the lower click volume, balancing the ecosystem.

4. SEO Shifts From Optimizing Clicks To Optimizing Ingestion

We are moving from a world of infinite shelf space (10 blue links, endless pagination) to a world of constrained shelf space (three recommendation slots in an AI response).

In this environment, SEO shifts from optimizing for clicks to optimizing for ingestion. The goal isn’t to get a human to visit your landing page; it’s to get your product data into the agent’s context window with enough authority that it recommends you.

The New “Technical SEO”: In the legacy model, technical SEO meant site speed, mobile responsiveness, and Core Web Vitals. In the protocol era, it means feed integrity. Agents don’t “browse” your site; they query your API. Your website becomes less of a visual destination and more of a structured database. The winners will be merchants who treat their product feed as their primary storefront.

The New “On-Page SEO”: Legacy SEO often rewarded articles that simply summarized what everyone else was already saying to rank for broad keywords. LLMs, however, are trained on that consensus. To be cited now, you must provide Information Gain, the delta between what the model already knows and the unique value you provide on top of the consensus.

  • You cannot “market” your way out of inferior specs. If you claim to be the “best running shoe for flat feet,” the model doesn’t look for adjectives; it validates your arch support measurements against podiatry standards in its training set.
  • Your content must shift from general engagement to structured “Product Truth.” LLMs prioritize detailed comparison tables, proprietary test results (e.g., “we dropped this phone 50 times”), and ingredient breakdowns. If your data isn’t structured for easy ingestion/verification, the model will bypass you for a source that is.
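As one illustration of machine-ingestible “product truth,” the sketch below emits a schema.org Product record as JSON-LD, using `additionalProperty` entries to carry the kind of granular specs described above. The brand name and measurement values are made-up placeholders.

```python
import json

# Minimal sketch of structured "product truth" as schema.org JSON-LD.
# Brand, values, and property names are illustrative placeholders.
product_truth = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Stability Running Shoe",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "arch_support_mm", "value": 12},
        {"@type": "PropertyValue", "name": "heel_drop_mm", "value": 8},
        {"@type": "PropertyValue", "name": "drop_test_cycles", "value": 50},
    ],
}

# Serialized for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(product_truth, indent=2))
```

The specific properties a model verifies will vary by category; the design choice that matters is exposing named, numeric claims rather than burying them in marketing copy.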

The New “Off-Page SEO”: Backlinks still matter, but their function changes. Instead of passing “link juice” for ranking, they now serve as verification sources for reputation synthesis, together with reviews and web mentions.

  • LLMs scrape third-party sites (e.g., Reddit, specialized forums, expert review sites) to form a consensus. A high volume of verified, specific reviews on trusted third-party platforms is the strongest signal you can send.
  • In a world where an AI suggests three options, brand familiarity becomes a tie-breaker. Brand advertising and organic brand building return as a critical lever to ensure users recognize the recommendation the AI provides.

5. The End Of “Marketing Brands”

The last decade allowed white-label brands to arbitrage their way to growth via ads, but agentic commerce acts as the quality filter for this model. While humans are swayed by slick branding, LLMs are dispassionate readers of data that will not recommend a “premium” product when the specs prove it is identical to a generic alternative.

The shift to protocols creates a paradox: Models understand long-tail intent perfectly but fulfill it with fat head inventory.

  • Safety Bias: Models prefer consensus to avoid hallucinations. A niche brand looks like noise; a Category King looks like truth.
  • The RAG Reality: RAG tools typically only scan the top 10-20 search results. Since search engines already favor authority, RAG often just reinforces the incumbents.

The only force that overrides this bias is granular data. Your merchant feed acts as the Claim, but RAG acts as the Trust Layer to verify it.

The market bifurcates:

  • The Incumbents win general intent via “trust” (consensus).
  • The Specialists win specific intent via “granularity” (specs), but only if they rank in the top search results.

If you expose data points the giants ignore (e.g., exact sourcing, chemical analysis), the model’s reasoning engine must select you to fulfill the constraint, but only if you rank on page 1 to be fetched.

Organic search is no longer about the click; it is the prerequisite for agentic verification.


Breaking Into The Black Box: Unlocking Meta’s Product-Level Ad Data

Ecommerce and Meta often go hand in hand. You can give Meta a 20,000-item catalog and a budget, and with its AI-powered Advantage+ campaigns, it’ll try to pair the right person with the right product, whether that’s a new customer or someone who’s already viewed those products before.

But what’s actually happening inside that ad? And is there a way to optimize this “black box” Dynamic Product Ad (DPA) format?

Advertisers can see ad-level performance, but have no platform-native insights on which specific products are being shown, clicked, or ignored within a broad DPA.

Is The Algorithm Making The Right Decisions?

That’s exactly the question we wanted to answer.

There are three common traps brands fall into:

1. Over-segmentation: Brands that want more insight break apart their catalog into niche product sets with tons of DPAs.

  • Pros: You can give each ad a bespoke name, which tells you exactly what’s being served. Nice!
  • Cons: This reduces data density and can kill ROI. There’s also a tendency to try to predict which audiences will respond to which products, which is no longer effective for most brands since Meta’s Andromeda updates.

2. Convoluted reporting: Brands try to infer what products Meta is prioritizing by pairing Google Analytics 4 session data (sessions by product) to Meta ads data (the campaigns/ads that sent these users).

  • Pros: Enables some analysis without falling into the “over-segmentation” pitfall.
  • Cons: Time-consuming to set up, and incomplete. This method doesn’t tell us anything about product-specific engagement within Meta; we would only be guessing at click-through rate, spend, and impressions.

3. “Set it and forget it”: Brands give up all control and let Meta take the wheel.

  • Pros: Avoids over-segmentation issues.
  • Cons: There’s a big risk in trusting the algorithm. You might be pushing products that get high impressions but low sales, effectively burning your budget and losing efficiency.

Trying to make decisions from just Meta Ads Manager UI data is a risk. Many marketers are still not confident in AI-powered campaigns.

At my agency, we created technology to solve this challenge, but fear not, I can walk you through the exact steps so you can do the same for your brand.

Our pilot client for the new technology was a major bathroom retailer investing heavily in DPAs within conversion campaigns.

Let’s go through the three phases in our journey to overcoming this ecommerce challenge.

Phase One: Surfacing Engagement Data

The first stage was visibility: understanding what was happening now within these “black box” DPA formats.

As I said above, Meta doesn’t directly report which specific product led to a specific purchase within a DPA in the Ads Manager interface. It’s simply not an available breakdown in the same way that age, placement, etc. are offered.

But the good news is that a treasure trove of insight is buried in the Meta APIs:

  1. Meta Marketing API (specifically the Insights API) is the main API we use to get all ad performance data. It’s how we’re pulling the key metrics like spend, impressions, and clicks for each ad_id and product_id.
  2. Meta Commerce Platform API (or Catalog API). This API provides the list of all product_ids and their associated details (like name, price, category, etc.).

Here are the steps:

  1. You first need to pipe API data into a data warehouse (we used BigQuery). Make sure you’re pulling the following fields from the Insights API: impressions, clicks, spend, ad_id, and product_id. If you aren’t a developer, you can use ETL connectors (like Supermetrics or Funnel.io) to get this data into BigQuery or Google Sheets, or use Python scripts if you have a data team.
  2. Once you have these two data streams, join them in a single table using a specific join key. We used Product ID; it’s the common thread that must exist in both the ad data and the catalog data for the connection to work.

Once you’ve done this, you can view your ad performance data (clicks, impressions), but now with a breakdown by product.
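The two steps above can be sketched in a few lines of pandas. This is a minimal illustration with made-up rows standing in for the API pulls; the column names mirror the fields named above, but the actual shape of your extracts will depend on your connector.

```python
import pandas as pd

# Hypothetical extracts standing in for the two API pulls; in practice
# these rows would arrive via an ETL connector or your own scripts.
insights = pd.DataFrame({
    "ad_id": ["a1", "a1", "a2"],
    "product_id": ["p100", "p200", "p100"],
    "impressions": [5000, 1200, 800],
    "clicks": [150, 12, 40],
    "spend": [75.0, 18.0, 12.0],
})

catalog = pd.DataFrame({
    "product_id": ["p100", "p200"],
    "name": ["Luxury Bath - White", "Luxury Bath - Green"],
    "product_type": ["Baths", "Baths"],
    "price": [499.0, 529.0],
})

# Join on the shared product_id key, then roll up to one row per product.
joined = insights.merge(catalog, on="product_id", how="left")
per_product = (
    joined.groupby(["product_id", "name"], as_index=False)
    [["impressions", "clicks", "spend"]]
    .sum()
)
per_product["ctr"] = per_product["clicks"] / per_product["impressions"]
print(per_product)
```

The same join works identically as a SQL `LEFT JOIN ... ON product_id` inside BigQuery if you prefer to keep everything in the warehouse.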

This new, combined dataset was then visualized in a Looker Studio report template, though other reporting tools would work just as well.

To make sense of the data, we needed an easily navigable report rather than pages of raw data. We built the following visualizations:

Product Scatter Chart, Impression Dynamic Product Explorer (DPEx) (Image from author, December 2025)

Product Scatter Chart: Separating each product into four distinct categories:

  • “Star Performers”: High impressions and high clicks.
  • “Promising Products”: Low impressions but a high click-through rate.
  • “Window Shoppers”: High impressions but very low clicks.
  • “Low Priority”: Low clicks and impressions.
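The four quadrants boil down to two thresholds. Here is a minimal sketch of that classification; the cut points (median-style impression and CTR thresholds) are an assumption for illustration, not the tool’s actual values.

```python
def classify(impressions: int, ctr: float,
             imp_cut: float, ctr_cut: float) -> str:
    """Place a product into one of the four scatter-chart quadrants."""
    high_imp = impressions >= imp_cut
    high_ctr = ctr >= ctr_cut
    if high_imp and high_ctr:
        return "Star Performer"
    if not high_imp and high_ctr:
        return "Promising Product"
    if high_imp and not high_ctr:
        return "Window Shopper"
    return "Low Priority"

# Example cut points: hypothetical catalog-wide medians.
print(classify(5800, 0.033, imp_cut=1000, ctr_cut=0.02))  # high imps, high CTR
print(classify(300, 0.05, imp_cut=1000, ctr_cut=0.02))    # low imps, high CTR
```

Running the classifier over the joined per-product table gives you the segment column that drives the scatter chart (and, later, the feed labels in Phase Three).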
Top 10 Product Types Chart (Image from author, December 2025)
Bottom 10 Product Types Chart (Image from author, December 2025)

Top/Bottom Products Bar Charts: See at a glance the top 10 and bottom 10 products by engagement.

Product Details Table: View detailed metrics for each product.

This could all be filtered by product name, product type, availability, and any other metrics we wanted (color, price, etc.).

We produced our first-ever client report for product-level ad engagement, and even with just engagement data, we learned a lot:

Creative: We used the data to improve creative briefs.

  • In our client data report, it was interesting to see how much Meta was pushing non-white products (orange sinks, green baths), despite the fact that 95% of their product sales are traditional white variations.
  • We hadn’t prioritized these products initially for the client, but have now created lots more video and creator content featuring these highly clickable variations.

Product Segmentation: We built powerful, data-driven product sets based on real engagement metrics.

  • For example, we tested showing only our most engaging “Star Performer” products in feed-powered collection ads in our upper funnel campaigns, where the algorithm usually has fewer signals to optimize towards.

Efficiency: This automated a complex analysis that was previously unwieldy and time-consuming.

Crucially, for the first time, we had enough evidence to challenge Meta’s “best practice” of using the widest possible product set.

Pitfalls & Key Considerations

This was a great first step, but we knew there were some key areas that simply tapping into Meta’s APIs wouldn’t solve:

  • Engagement Vs. Conversions: The major drawback is that product-level breakdowns are only available for click and impression data, not revenue or conversions. The “Window Shoppers” category, for example, identifies products that get low clicks, but we couldn’t (in this phase) definitively say they don’t lead to sales.
  • Context Is Key: This data is a powerful new diagnostic tool. It tells us what Meta is showing and what users are clicking, which is a huge step forward. The why (e.g., “is this high-impression, low-click item just a high-value product?”) still requires our team’s analysis.

Phase Two: Evolving Meta Engagement Data With GA4 Revenue Data

We knew the Meta-only data above explores just one part of the journey. To evolve, we needed to join it with GA4 data to find out what customers actually buy after interacting with our feed-powered dynamic product ads.

The Technical Bridge: How We Joined the Data

While Phase One relied on ETL connectors to pull Meta’s API data, Phase Two requires a different stream for GA4. We tapped into the native GA4 BigQuery export specifically for purchase events. This provides the raw event-level data (revenue and units sold) for every transaction.

The join isn’t a single step; it relies on two keys to connect the datasets:

  • The Ad ID Bridge: To link a GA4 session back to a specific Meta ad, we captured the ad_id via dynamic UTM parameters. By setting your URL parameters to utm_content={{ad.id}}, you create a magic bridge between the click and the session.
  • The Item ID Match: Once the session is linked, we use the Item ID. This must be perfectly aligned so that your Meta product_id and GA4 item_id are identical; otherwise, the model breaks.
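The two bridges above can be sketched with another pair of joins. This is illustrative only: the rows are invented, and it assumes the `utm_content` value has already been extracted per session from the GA4 export (in the raw export it lives in a nested traffic-source field, which varies by setup).

```python
import pandas as pd

# Hypothetical flattened GA4 purchase items: one row per item bought,
# tagged with the utm_content captured from utm_content={{ad.id}}.
ga4 = pd.DataFrame({
    "utm_content": ["120211234567890123", "120211234567890123"],
    "item_id": ["p100", "p300"],
    "item_revenue": [499.0, 29.0],
})

# Hypothetical Meta row: the product actually clicked within the DPA.
meta = pd.DataFrame({
    "ad_id": ["120211234567890123"],
    "product_id": ["p200"],   # e.g., the Luxury Bath - Green
    "clicks": [40],
})

# Bridge 1: ad_id links each GA4 session back to the Meta ad.
sessions = ga4.merge(meta, left_on="utm_content", right_on="ad_id")

# Bridge 2: compare the product clicked (product_id) with the items
# bought (item_id) to surface halo effects like the green-bath example.
halo = sessions[sessions["item_id"] != sessions["product_id"]]
print(halo[["product_id", "item_id", "item_revenue"]])
```

In this toy example, both purchased items differ from the clicked product, which is exactly the halo pattern described below.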

Pitfalls & Key Considerations

Joining Meta and GA4 data sounds easy enough, but there were some key blockers to overcome.

Clean Data. The whole model breaks if your Meta product IDs don’t cleanly match your GA4 item IDs. You must ensure your product catalog and your GA4 tagging are perfectly aligned before you start.

However, our second issue is harder to overcome: attribution issues. The GA4 data will almost always show lower conversion numbers than Meta’s UI.

This is because, in our experience, Meta often “over-credits.” It benefits from longer attribution windows, including view-through conversions, and it gives itself full credit for each conversion it measures (rather than spreading out across multiple channels).

GA4 often “under-credits” channels like Meta. It uses data-driven attribution to try and give credit to multiple touchpoints. However, it is unable to completely follow user journeys, especially those that don’t include clicks to the site. This means GA4 doesn’t know to credit a social ad, even if that ad was the deciding factor in the purchase journey.

Although we’d love to be able to get a 1:1 match from each product purchase back to a specific product interacted with on Meta, neither GA4 nor Meta can achieve this insight easily. However, there’s still value in the relative insights and trends.

Here’s an example:

  • Meta’s UI: Reported our “Luxury Bath – Green” product was our top performer last month, with high volumes of clicks and impressions in our dynamic ads.
  • The Problem: When we joined our GA4 data, we saw no sales for that specific bath last month, at all, from any channel!
  • The Assumption: If we only used ad engagement data, we’d assume this product was wasting spend by generating low-quality traffic.

But, by looking at all items purchased in those GA4 sessions that originated from the “Luxury Bath – Green” product, we discover that many users who clicked the bath went on to convert, just for the white variation instead.

The Insight: The “Luxury Bath” ad wasn’t a failure; it was a highly effective halo product for our client, drawing in aspirational customers who then converted on other products.

The Action: We can confidently commission creator content focusing on the green bath to draw in new users, even though we know they’re likely to buy a different color when it comes to purchase.

Phase Three: Performance-Enhanced Feeds

Once we had this data at our fingertips, the temptation was to use it purely for insights and reporting.

The next level was even better, using this data to create automated supplementary feeds.

It was time to bring back those four product performance segments from our scatter charts.

Using our feed management tools, we pushed the product performance segments into our Meta product feed as new custom labels. This meant we could dynamically build new product sets based on product performance; for example, a rule was created for a product set where Custom Label 0 equals “Star Performer.”
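Without a feed management tool, the same idea can be approximated with a supplementary feed file. The sketch below writes one using the `id` and `custom_label_0` column names from Meta’s catalog spec; the segment mapping is invented, and your feed tool may expect different headers, so treat this as a starting point.

```python
import csv

# Hypothetical mapping from product ID to the performance segment
# computed upstream (e.g., from the scatter-chart classification).
segments = {
    "p100": "Star Performer",
    "p200": "Promising Product",
    "p300": "Window Shopper",
}

# Write a supplementary feed that sets custom_label_0 per product,
# so Meta product sets can filter on the label value.
with open("supplementary_feed.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "custom_label_0"])
    for product_id, segment in segments.items():
        writer.writerow([product_id, segment])
```

Uploading a file like this as a supplementary feed overwrites only the label column, leaving the primary catalog feed untouched, which is what makes the segment labels safe to refresh on a schedule.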

We could then conduct the following product set tests:

  • “Window Shoppers”: (High impressions, low clicks/sales). Feed these into an exclusion set to understand if efficiency improves when we remove them from the feed.
  • “Promising Products”: (High CTR, high CVR, low impressions). Feed these into a scaling set with more budget to understand if demand is hidden.
  • “Star Performers”: (High impressions, high clicks). Feed these into a retargeting set to recapture engaged users with our signature ranges.

Pitfalls & Key Considerations

The tests above are simply example hypotheses, and your mileage will vary! We strongly recommend structured experimentation to understand the impact on overall performance.

Is Your Brand Ready To Break Out Of The ‘Black Box’?

You can partially break out of Meta’s “black box,” and this can be a strategic move for ecommerce brands.

The journey moves from surfacing basic engagement data (Phase One) to joining it with sales data for true, profit-driven insights (Phase Two), and ultimately, to automating your strategy with performance-enhanced feeds (Phase Three).

This is how you move from trusting the algorithm to challenging it with evidence. If you’re a decision-maker wondering where to start, here are the three questions to ask:

  1. “Can you show me which specific products in our catalog are being prioritized by Meta?”
  2. “Are our Meta product_ids and GA4 item_ids identical?”
  3. “Are we capturing the ad.id in our UTM parameters on every single ad?”

If the answers to these questions are “I don’t know,” you’re probably still operating inside the black box. Breaking it open is possible. It just requires the right data, the right technical expertise, and the will to finally see what’s truly driving performance.

Featured Image: Roman Samborskyi/Shutterstock

WP Go Maps Plugin Vulnerability Affects Up To 300K WordPress Sites via @sejournal, @martinibuster

A security advisory was published about a vulnerability affecting the WP Go Maps plugin for WordPress installed on over 300,000 websites. The flaw enables authenticated subscribers to modify map engine settings.

WP Go Maps Plugin

The WP Go Maps plugin is used by local business WordPress sites to display customizable maps on pages and posts, including contact page maps, delivery areas, and store locations. Site owners can manage map markers and map settings without writing code.

The plugin had four vulnerabilities in 2025 and seven in 2024. Vulnerabilities were also discovered in earlier years, stretching back to 2019, though less often.

Vulnerability

The vulnerability can be exploited by authenticated attackers with Subscriber-level access or higher. The Subscriber role is the lowest WordPress permission role, meaning an attacker needs only a basic user account to exploit the issue, but only on sites that offer that account level to users.

The vulnerability is caused by a missing capability check in the plugin’s processBackgroundAction() function. A capability check is used to verify whether a logged-in user is allowed to perform a specific action. Because this check is missing, the function processes requests from users who do not have permission to change plugin settings.

As a result, authenticated attackers with Subscriber-level access can modify global map engine settings used by the plugin. These settings apply site-wide and affect how the plugin functions across the website.

Wordfence described the vulnerability as an unauthorized modification of data caused by a missing capability check. In practice, this means the plugin allows low-privileged users to change global settings that should be restricted to administrators.

The Wordfence advisory explains:

“The WP Go Maps (formerly WP Google Maps) plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the processBackgroundAction() function in all versions up to, and including, 10.0.04. This makes it possible for authenticated attackers, with Subscriber-level access and above, to modify global map engine settings”

Any site running an affected version of the plugin with subscriber-level registration enabled is exposed to authenticated attackers.

The vulnerability affects all versions of WP Go Maps up to and including 10.0.04. A patch is available; site owners should update the plugin to version 10.0.05 or newer to fix the vulnerability.

Featured Image by Shutterstock/Dean Drobot

Sam Altman Says OpenAI “Screwed Up” GPT-5.2 Writing Quality via @sejournal, @MattGSouthern

Sam Altman said OpenAI “screwed up” GPT-5.2’s writing quality during a developer town hall Monday evening.

When asked about user feedback that GPT-5.2 produces writing that’s “unwieldy” and “hard to read” compared to GPT-4.5, Altman was blunt.

He said:

“I think we just screwed that up. We will make future versions of GPT 5.x hopefully much better at writing than 4.5 was.”

Altman explained that OpenAI made a deliberate choice to focus GPT-5.2’s development on technical capabilities:

“We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing. And we have limited bandwidth here, and sometimes we focus on one thing and neglect another.”

How OpenAI Positioned Each Model

The contrast between GPT-4.5 and GPT-5.2 shows where OpenAI focused its resources.

When OpenAI introduced GPT-4.5 in February 2025, the company emphasized natural interaction and writing. OpenAI said interacting with GPT-4.5 “feels more natural” and called it “useful for tasks like improving writing.”

GPT-5.2’s announcement took a different direction. OpenAI positioned it as the most capable model series yet for professional knowledge work, with improvements in creating spreadsheets, building presentations, writing code, and handling complex, multi-step projects.

The release post spotlights spreadsheets, presentations, tool use, and coding. Writing appears more briefly, with technical writing noted as an improvement for GPT-5.2 Instant. But Altman’s comments suggest the overall writing experience still fell short for users comparing it to GPT-4.5.

Why This Matters

We’ve covered the iterative changes to ChatGPT since GPT-5 launched in August, including updates to warmth and tone and the GPT-5.1 instruction-following improvements. OpenAI regularly adjusts model behavior based on user feedback, and regressions in one area while improving another aren’t new.

What’s unusual is hearing Altman acknowledge a tradeoff this directly. For anyone using ChatGPT output in client-facing work, drafts, or polished writing, this explains why outputs may have changed. Model upgrades don’t guarantee improvement across every capability.

If you rely on ChatGPT for writing, treat model updates like any other dependency change. Re-test your prompts when defaults change, and keep a fallback if output quality matters for your workflow.

Looking Ahead

Altman said he believes “the future is mostly going to be about very good general purpose models” and that even coding-focused models should “write well, too.”

No timeline was given for when GPT-5.x writing improvements will ship. OpenAI typically iterates on model behavior through point releases, so changes could arrive gradually rather than in a single update.

Hear Altman’s full statement in the video below:


Featured Image: FotoField/Shutterstock

The Download: why LLMs are like aliens, and the future of head transplants

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the new biologists treating LLMs like aliens  

How large is a large language model? We now coexist with machines so vast and so complicated that nobody quite understands what they are, how they work, or what they can really do—not even the people who build them.

That’s a problem. Even though nobody fully understands how it works—and thus exactly what its limitations might be—hundreds of millions of people now use this technology every day. 

To help overcome our ignorance, researchers are studying LLMs as if they were doing biology or neuroscience on vast living creatures—city-size xenomorphs that have appeared in our midst. And they’re discovering that large language models are even weirder than they thought. Read the full story.

—Will Douglas Heaven

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

And mechanistic interpretability, the technique these researchers are using to try to understand AI models, is one of our 10 Breakthrough Technologies for 2026. Check out the rest of the list here!

Job titles of the future: Head-transplant surgeon

The Italian neurosurgeon Sergio Canavero has been preparing for a surgery that might never happen. His idea? Swap a sick person’s head—or perhaps just the brain—onto a younger, healthier body.

Canavero caused a stir in 2017 when he announced that a team he advised in China had exchanged heads between two corpses. But he never convinced skeptics that his technique could succeed—or to believe his claim that a procedure on a live person was imminent.

Canavero may have withdrawn from the spotlight, but the idea of head transplants isn’t going away. Instead, he says, the concept has recently been getting a fresh look from life-extension enthusiasts and stealth Silicon Valley startups. Read the full story.

—Antonio Regalado

This story is from the latest print issue of MIT Technology Review magazine, which is all about exciting innovations. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech is facing multiple high-profile social media addiction lawsuits 
Meta, TikTok and YouTube will face parents’ accusations in court this week. (WP $)
+ It’s the first time they’re defending against these claims before a jury. (CNN)

2 Power prices are surging in the world’s largest data center hub
Virginia is struggling to meet record demand during a winter storm, partly because of the centers’ electricity demands. (Reuters)
+ Why these kinds of violent storms are getting harder to forecast. (Vox)
+ AI is changing the grid. Could it help more than it harms? (MIT Technology Review)

3 TikTok has started collecting even more data on its users
Including precise information about their location. (Wired $)

4 ICE-watching groups are successfully fighting DHS efforts to unmask them
An anonymous account holder sued to block ICE from identifying them—and won. (Ars Technica)

5 A new wave of AI companies want to use AI to make AI better
The AI ouroboros is never-ending. (NYT $)
+ Is AI really capable of making bona fide scientific advancements? (Undark)
+ AI trained on AI garbage spits out AI garbage. (MIT Technology Review)

6 Iran is testing a two-tier internet
Meaning its current blackout could become permanent. (Rest of World)

7 Don’t believe the humanoid robot hype
Even a leading robot maker admits that at best, they’re only half as efficient as humans. (FT $)
+ Tesla wants to put its Optimus bipedal machine to work in its Austin factory. (Insider)
+ Why the humanoid workforce is running late. (MIT Technology Review)

8 AI is changing how manufacturers create new products
Including thinner chewing gum containers and new body wash odors. (WSJ $)
+ AI could make better beer. Here’s how. (MIT Technology Review)

9 New Jersey has had enough of e-bikes 🚲
But will other US states follow its lead? (The Verge)

10 Sci-fi writers are cracking down on AI
Human-produced works only, please. (TechCrunch)
+ San Diego Comic-Con was previously a safe space for AI-generated art. (404 Media)
+ Generative AI is reshaping South Korea’s webcomics industry. (MIT Technology Review)

Quote of the day

“Choosing American digital technology by default is too easy and must stop.”

—Nicolas Dufourcq, head of French state-owned investment bank Bpifrance, makes his case for why Big European companies should use European-made software as tensions with the US rise, the Wall Street Journal reports.

One more thing

The return of pneumatic tubes

Pneumatic tubes were once touted as something that would revolutionize the world. In science fiction, they were envisioned as a fundamental part of the future—even in dystopias like George Orwell’s 1984, where they help to deliver orders for the main character, Winston Smith, in his job rewriting history to fit the ruling party’s changing narrative.

In real life, the tubes were expected to transform several industries in the late 19th century through the mid-20th. For a while, the United States took up the systems with gusto.

But by the mid to late 20th century, use of the technology had largely fallen by the wayside, and pneumatic tube technology became virtually obsolete. Except in hospitals. Read the full story.

—Vanessa Armstrong

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You really can’t beat the humble jacket potato for a cheap, comforting meal. 
+ These tips might help you whenever anxiety strikes. ($)
+ There are some amazing photos in this year’s Capturing Ecology awards.
+ You can benefit from meditation any time, anywhere. Give it a go!

The power of sound in a virtual world

In an era where business, education, and even casual conversations occur via screens, sound has become a differentiating factor. We obsess over lighting, camera angles, and virtual backgrounds, but how we sound can be just as critical to credibility, trust, and connection.

That’s the insight driving Erik Vaveris, vice president of product management and chief marketing officer at Shure, and Brian Scholl, director of the Perception & Cognition Laboratory at Yale University. Both see audio as more than a technical layer: It’s a human factor shaping how people perceive intelligence, trustworthiness, and authority in virtual settings.

“If you’re willing to take a little bit of time with your audio set up, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course, your customers,” says Vaveris.

Scholl’s research shows that poor audio quality can make a speaker seem less persuasive, less hireable, and even less credible.

“We know that [poor] sound doesn’t reflect the people themselves, but we really just can’t stop ourselves from having those impressions,” says Scholl. “We all understand intuitively that if we’re having difficulty being understood while we’re talking, then that’s bad. But we sort of think that as long as you can make out the words I’m saying, then that’s probably all fine. And this research showed in a somewhat surprising way, to a surprising degree, that this is not so.”

For organizations navigating hybrid work, training, and marketing, the stakes have become high.

Vaveris points out that the pandemic was a watershed moment for audio technology. As classrooms, boardrooms, and conferences shifted online almost overnight, demand accelerated for advanced noise suppression, echo cancellation, and AI-driven processing tools that make meetings more seamless. Today, machine learning algorithms can strip away keyboard clicks or reverberation and isolate a speaker’s voice in noisy environments. That clarity underpins the accuracy of AI meeting assistants that can step in to transcribe, summarize, and analyze discussions.

The implications are rippling across industries. Clearer audio levels the playing field for remote participants, enabling inclusive collaboration. It empowers executives and creators alike to produce broadcast-quality content from the comfort of their home office. And it offers companies new ways to build credibility with customers and employees without the costly overhead of traditional production.

Looking forward, the convergence of audio innovation and AI promises an even more dynamic landscape: from real-time captioning in your native language to audio filtering, to smarter meeting tools that capture not only what is said but how it’s said, and to technologies that disappear into the background while amplifying the human voice at the center.

“There’s a future out there where this technology can really be something that helps bring people together,” says Vaveris. “Now that we have so many years of history with the internet, we know there’s usually two sides to the coin of technology, but there’s definitely going to be a positive side to this, and I’m really looking forward to it.”

In a world increasingly mediated by screens, sound may prove to be the most powerful connector of all.

This episode of Business Lab is produced in partnership with Shure.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

This episode is produced in partnership with Shure.

Our topic today is the power of sound. As our personal and professional lives become increasingly virtual, audio is emerging as an essential tool for everything from remote work to virtual conferences to virtual happy hour. While appearance is often top of mind in video conferencing and streaming, audio can be as or even more important, not only to effective communication, but potentially to brand equity for both the speaker and the company.

Two words for you: crystal clear.

My guests today are Erik Vaveris, VP of Product Management and Chief Marketing Officer at Shure, and Brian Scholl, Director of the Perception & Cognition Laboratory at Yale University.

Welcome, Erik and Brian.

Erik Vaveris: Thank you, Megan. And hello, Brian. Thrilled to be here today.

Brian Scholl: Good afternoon, everyone.

Megan: Fantastic. Thank you both so much for being here. Erik, let’s open with a bit of background. I imagine the pandemic changed the audio industry in some significant ways, given the pivot to our modern remote hybrid lifestyles. Could you talk a bit about that journey and some of the interesting audio advances that arose from that transformative shift?

Erik: Absolutely, Megan. That’s an interesting thing to think about now being here in 2025. And if you put yourself back in those moments in 2020, when things were fully shut down and everything was fully remote, the importance of audio quality became immediately obvious. As people adopted Zoom or Teams or platforms like that overnight, there were a lot of technical challenges that people experienced, but the importance of how they were presenting themselves to people via their audio quality was a bit less obvious. As Brian’s noted in a lot of the press that he’s received for his wonderful study, we know how we look on video. We can see ourselves back on the screen, but we don’t know how we sound to the people with whom we’re speaking.

If a meeting participant on the other side can manage to parse the words that you’re saying, they’re not likely to speak up and say, “Hey, I’m having a little bit of trouble hearing you.” They’ll just let the meeting continue. And if you don’t have a really strong level of audio quality, you’re asking the people that you’re talking to devote way too much brainpower to just determining the words that you’re saying. And you’re going to be fatiguing to listen to. And your message won’t come across. In contrast, if you’re willing to take a little bit of time with your audio set up, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course your customers. Back in 2020, this very quickly became a marketing story that we had to tell immediately.

And I have to say, it’s so gratifying to see Brian’s research in the news because, to me, it was like, “Yes, this is what we’ve been experiencing. And this is what we’ve been trying to educate people about.” Having the real science to back it up means a lot. But from that, development on improvements to key audio processing algorithms accelerated across the whole AV industry.

I think, Megan and Brian, you probably remember hearing loud keyboard clicking when you were on calls and meetings, or people eating potato chips and things like that back on those. But you don’t hear that much today because most platforms have invested in AI-trained algorithms to remove undesirable noises. And I know we’re going to talk more about that later on.

But the other thing that happened, thankfully, as we got into the late spring and summer of 2020, was that educational institutions, especially universities, and also businesses realized that things were going to need to change quickly. Nothing was going to be the same. And universities realized that all classrooms were going to need hybrid capabilities for both remote students and students in the classroom. And that helped the market for professional AV equipment start to recover because we had been pretty much completely shut down in the earlier months. But that focus on hybrid meeting spaces of all types accelerated more investment and more R&D into making equipment and further developing those key audio processing algorithms for more and different types of spaces and use cases. And since then, we’ve really seen a proliferation of different types of unobtrusive audio capture devices based on arrays of microphones and the supporting signal processing behind them. And right now, machine-learning-trained signal processing is really the norm. And that all accelerated, unfortunately, because of the pandemic.

Megan: Yeah. Such an interesting period of change, as you say. And Brian, what did you observe and experience in academia during that time? How did that time period affect the work at your lab?

Brian: I’ll admit, Megan, I had never given a single thought to audio quality or anything like that, certainly until the pandemic hit. I was thrown into this, just like the rest of the world was. I don’t believe I’d ever had a single video conference with a student or with a class or anything like that before the pandemic hit. But in some ways, our experience in universities was quite extreme. I went on a Tuesday from teaching an in-person class with 300 students to being on Zoom with everyone suddenly on a Thursday. Business meetings come in all shapes and sizes. But this was quite extreme. This was a case where suddenly I’m talking to hundreds and hundreds of people over Zoom. And every single one of them knows exactly what I sound like, except for me, because I’m just speaking my normal voice and I have no idea how it’s being translated through all the different levels of technology.

I will say, part of the general rhetoric we have about the pandemic focuses on all the negatives and the lack of personal connection and nuance and the fact that we can’t see how everyone’s paying attention to each other. Our experience was a bit more mixed. I’ll just tell you one anecdote. Shortly after the pandemic started, I started teaching a seminar with about 20 students. And of course, this was still online. What I did is I just invited, for whatever topic we were discussing on any given day, I sent a note to whoever was the clear world leader in the study of whatever that topic was. I said, “Hey, don’t prepare a talk. You don’t have to answer any questions. But just come join us on Zoom and just participate in the conversation. The students will have read some of your work.”

Every single one of them said, “Let me check my schedule. Oh, I’m stuck at home for a year. Sure. I’d be happy to do that.” And that was quite a positive. The students got to meet a who’s who of cognitive science from this experience. And it’s true that there were all these technological difficulties, but that would never, ever have happened if we were teaching the class in real life. That would’ve just been way too much travel and airfare and hotel and scheduling and all of that. So, it was a mixed bag for us.

Megan: That’s fascinating.

Erik: Yeah. Megan, can I add?

Megan: Of course.

Erik: That is really interesting. And that’s such a cool idea. And it’s so wonderful that that worked out. I would say that working for a global company, we like to think that, “Oh, we’re all together. And we’re having these meetings. And we’re in the same room,” but the reality was we weren’t in the same room. And there hadn’t been enough attention paid to the people who were conferencing in speaking not their native language in a different time zone, maybe pretty deep into the evening, in some cases. And the remote work that everybody got thrown into immediately at the start of the pandemic did force everybody to start to think more about those types of interactions and put everybody on a level playing field.

And that was insightful. And that helped some people have stronger voices in the work that we were doing than they maybe did before. And it’s also led businesses really across the board, there’s a lot written about this, to be much more focused on making sure that participants from those who may be remote at home, may be in the office, may be in different offices, may be in different time zones, are all able to participate and collaborate on really a level playing field. And that is a positive. That’s a good thing.

Megan: Yeah. There are absolutely some positive side effects there, aren’t there? And it inspired you, Brian, to look at this more closely. And you’ve done a study that shows poor audio quality can actually affect the perception of listeners. So, I wonder what prompted the study, in particular. And what kinds of data did you gather? What methodology did you use?

Brian: Yeah. The motivation for this study was actually a real-world experience, just like we’ve been talking about. In addition to all of our classes moving online with no notice whatsoever, the same thing was true of our departmental faculty meetings. Very early on in the pandemic, we had one of these meetings. And we were talking about some contentious issue about hiring or whatever. And two of my colleagues, who I’d known very well and for many, many years, spoke up to offer their opinions. And one of these colleagues is someone who I’m very close with. He was actually a former graduate student of mine once upon a time. And we almost always see eye to eye on things. He happened to be participating in that meeting from an old not-so-hot laptop. His audio had that sort of tinny quality that we’re all familiar with. I could totally understand everything he was saying, but I found myself just being a little skeptical.

I didn’t find his points so compelling as usual. Meanwhile, I had another colleague, someone who I deeply respect, I’ve collaborated with, but we don’t always see eye to eye on these things. And he was participating in this first virtual faculty meeting from his home recording studio. Erik, I don’t know if his equipment would be up to your level or not, but he sounded better than real life. He sounded like he was all around us. And I found myself just sort of naturally agreeing with his points, which sort of was notable and a little surprising in that context. And so, we turned this into a study.

We played people a number of short audio clips, maybe like 30 seconds or so. And we had these being played in the context of very familiar situations and decisions. One of them might be like a hiring decision. You would have to listen to this person telling you why they think they might be a good fit for your job. And then afterwards, you had to make a simple judgment. It might be of a trait. How intelligent did that person seem? Or it might be a real-world decision like, “Hey, based on this, how likely would you be to pursue trying to hire them?” And critically, we had people listen to exactly the same sort of scripts, but with a little bit of work behind the scenes to affect the audio quality. In one case, the audio sounded crisp and clear. Recorded with a decent microphone. And here’s what it sounded like.

Audio Clip: After eight years in sales, I’m currently seeking a new challenge which will utilize my meticulous attention to detail and friendly professional manner. I’m an excellent fit for your company and will be an asset to your team as a senior sales manager.

Brian: Okay. Whatever you think of the content of that message, at least it’s nice and clear. Other subjects listened to exactly the same recording. But again, it had that sort of tinny quality that we’re all familiar with when people’s voices are filtered through a microphone or a recording setup that’s not so hot. That sounded like this.

Audio Clip: After eight years in sales, I’m currently seeking a new challenge which will utilize my meticulous attention to detail and friendly professional manner. I’m an excellent fit for your company and will be an asset to your team as a senior sales manager.

Brian: All right. Now, the thing that I hope you can get from that recording there is that although it clearly has this what we would call, as a technical term, a disfluent sound, it’s just a little harder to process, you are ultimately successful, right? Megan, Erik, you were able to understand the words in that second recording.

Megan: Yeah.

Erik: Mm-hmm.

Brian: And we made sure this was true for all of our subjects. We had them do word-for-word transcription after they made these judgments. And I’ll also just point out that this kind of manipulation clearly can’t be about the person themselves, right? You couldn’t make your voices sound like that in real world conversation if you tried. Voices just don’t do those sorts of things. Nevertheless, in a way that sort of didn’t make sense, that was kind of irrational because this couldn’t reflect the person, this affected all sorts of judgments about people.

So, people were judged to be about 8% less hirable. They were judged to be about 8% less intelligent. We also did this in other contexts. We did this in the context of dateability as if you were listening to a little audio clip from someone who was maybe interested in dating you, and then you had to make a judgment of how likely would you be to date this person. Same exact result. People were a little less datable when their audio was a little more tinny, even though they were completely understandable.

The result that I thought was in some ways most striking came from one of the clips, which was about someone who had been in a car accident. It was a little narrative about what had happened in the car accident. And they were talking as if to the insurance agent. They were saying, “Hey, it wasn’t my fault. This is what happened.” And afterwards, we simply had people make a natural intuitive judgment of how credible they thought the person’s story was. And when it was recorded with high-end audio, these messages were judged to be about 8% more credible in this context. So those are our experiments. What it shows really is something about the power of perception. We know that that sort of sound doesn’t reflect the people themselves, but we really just can’t stop ourselves from having those impressions made. And I don’t know about you guys, but, Erik, I think you’re right, that we all understand intuitively that if we’re having difficulty being understood while we’re talking, then that’s bad. But we sort of think that as long as you can make out the words I’m saying, then that’s probably all fine. And this research showed in a somewhat surprising way, and to a surprising degree, that this is not so.

Megan: It’s absolutely fascinating.

Erik: Wow.

Megan: From an industry perspective, Erik, what are your thoughts on those study results? Did it surprise you as well?

Erik: No, like I said, I found it very, very gratifying because we invest a lot in trying to make sure that people understand the importance of quality audio, but we kind of come about that intuitively. Our entire company is audio people. So of course, we think that. And it’s our mission to help other people achieve those higher levels of audio in everything that they do, whether you’re a minister at a church or you’re teaching a class or you’re performing on stage. When I first saw the news about Brian’s study, I think it was the NPR article that just came up in one of my feeds. I read it and it made me feel like my life’s work has been validated to some extent. I wouldn’t say we were surprised by it, but it made a lot of sense to us. Let’s put it that way.

Megan: And how-

Brian: This is what we’re hearing. Oh, sorry. Megan, I was going to say this is what we’re hearing from a lot of the audio professionals as they’re saying, “Hey, you scientists, you finally caught up to us.” But of course-

Erik: I wouldn’t say it that way, Brian.

Brian: Erik, you’re in an unusual circumstance because you guys think about audio every day. When we’re on Zoom, look, I can see the little rectangle as well as you can. I can see exactly what I look like. I can check the lighting. I check my hair. We all do that every day. But I would say most people really, they use whatever microphone came with their setup, and never give a second thought to what they sound like because they don’t know what they sound like.

Megan: Yeah. Absolutely.

Erik: Absolutely.

Megan: Avoid listening to yourself back as well. I think that’s common. We don’t scrutinize audio as much as we should. I wonder, Erik, since the study came out, how are you seeing that research play out across industry? Can you talk a bit about the importance of strong, clear audio in today’s virtual world and the challenges that companies and employees are facing as well?

Erik: Yeah. Sure, Megan. That’s a great question. And studies kind of back this up, businesses understand that collaboration is the key to many things that we do. They know that that’s critical. And they are investing in making the experiences for the people at work better because of that knowledge, that intuitive understanding. But there are challenges. It can be expensive. You need solutions that the people who walk into a room, or join a meeting on their personal device, are motivated to use and can use because they’re simple. You also have to overcome the barriers to investment. We in the AV industry have had to look a lot at how we can bring down the overall cost of ownership of setting up AV technology because, as we’ve seen, the prices of everything that goes into making a product are not coming down.

Simplifying deployment and management is critical. Beyond just audio technology, IoT technology and cloud technology for IT teams to be able to easily deploy and manage classrooms across an entire university campus or conference rooms across a global enterprise are really, really critical. And those are quickly evolving. And integrations with more standard common IT tools are coming out. And that’s one area. Another thing is just for the end user, having the same user interface in each conference room that is familiar to everyone from their personal devices is also important. For many, many years, a lot of people had the experience where, “Hey, it’s time we’re going to actually do a conference meeting.” And you might have a few rooms in your company or in your office area that could do that. And you walk into the meeting room. And how long does it take you to actually get connected to the people you’re going to talk with?

There was always a joke that you’d have to spend the first 15 minutes of a meeting working all of that out. And that’s because the technology was fragmented and you had to do a lot of custom work to make that happen. But these days, I would say platforms like Zoom and Teams and Google and others are doing a really great job with this. If you have the latest and greatest in your meeting rooms and you know how to join from your own personal device, it’s basically the same experience. And that is streamlining the process for everyone. Bringing down the costs of owning it so that companies can get to those benefits to collaboration is kind of the key.

Megan: I was going to ask if we could dive a little deeper into that kind of audio quality, the technological advancements that AI has made possible, which you did touch on slightly there, Erik. What are the most significant advancements, in your view? And how are those impacting the ways we use audio and the things we can do with it?

Erik: Okay. Let me try to break that down into-

Megan: That’s a big question. Sorry.

Erik: … a couple different sections. Yeah. No, and one that’s just so exciting. Machine-learning-based digital signal processing, or DSP, is here and is the norm now. If you think about the beginning of telephones and teleconferencing, just going way back, one of the initial problems you had whenever you tried to get something out of a dedicated handset onto a table was echo. And I’m sure we’ve all heard that at some point in our life. You need to have a way to cancel echo. But by the way, you also want people to be able to speak at the same time on both ends of a call. You get to some of those very rudimentary things. Machine learning is really supercharging those algorithms to provide better performance with fewer trade-offs, fewer artifacts in the actual audio signal.

Noise reduction has come a long way. I mentioned earlier on, keyboard sounds and the sounds of people eating, and how you just don’t hear that anymore, at least I don’t when I’m on conference calls. But only a few years ago, that could be a major problem. The machine-learning-trained digital signal processing is in the market now and it’s doing a better job than ever in removing things that you don’t want from your sound. We have a new de-verberation algorithm, so if you have a reverberant room with echoes and reflections that’s getting into the audio signal, that can degrade the experience there. We can remove that now. Another thing, the flip side of that is that there’s also a focus on isolating the sound that you do want and the signal that you do want.

Microsoft has rolled out a voice print feature in Teams that allows you, if you’re willing, to provide them with a sample of your voice. And then whenever you’re talking from your device, it will take out anything else that the microphone may be picking up so that even if you’re in a really noisy environment outdoors or, say, in an airport, the people that you’re speaking with are going to hear you and only you. And it’s pretty amazing as well. So those are some of the things that are happening today and are available today.

Another thing that’s emerged from all of this is we’ve been talking about how important audio quality is to the people participating in a discussion, the people speaking, the people listening, how everyone is perceived, but a new consumer, if you will, of audio in a discussion or a meeting has emerged, and that is in the form of the AI agent that can summarize meetings and create action plans, do those sorts of things. But for it to work, a clean transcription of what was said is already table stakes. It can’t be garbled. It can’t miss key things. It needs to get it word for word, sentence for sentence throughout the entire meeting. And the ability to attribute who said what to the meeting participants, even if they’re all in the same room, is quickly upon us. And the ability to detect and integrate sentiment and emotion of the participants is going to become very important as well for us to really get the full value out of those kinds of AI agents.

So audio quality is as important as ever for humans, as Brian notes, in some ways more important because this is now the normal way that we talk and meet, but it’s also critical for AI agents to work properly. And it’s different, right? It’s a different set of considerations. And there’s a lot of emerging thought and work that’s going into that as well. And boy, Megan, there’s so much more we could say about this beyond meetings and video conferences. AI tools to simplify the production process. And of course, there’s generative AI of music content. I know that’s beyond the scope of what we’re talking about. But it’s really pretty incredible when you look around at the work that’s happening and the capabilities that are emerging.

Megan: Yeah. Absolutely. Sounds like there are so many elements to consider and work going on. It’s all fascinating. Brian, what kinds of emerging capabilities and use cases around AI and audio quality are you seeing in your lab as well?

Brian: Yeah. Well, I’m sorry that Brian himself was not able to be here today, but I’m an AI agent.

Megan: You got me for a second there.

Brian: Just kidding. The fascinating thing that we’re seeing from the lab, from the study of people’s impressions is that all of this technology that Erik has described, when it works best, it’s completely invisible. Erik, I loved your point about not hearing potato chips being eaten or rain in the background or something like that. You’re totally right. I used to notice that all the time. I don’t think I’ve noticed that recently, but I also didn’t notice that I haven’t noticed that recently, right? It just kind of disappears. The interesting thing about these perceptual impressions, we’re constantly drawing intuitive conclusions about people based on how they sound. And that might be a good thing or a bad thing when we’re judging things like trustworthiness, for example, on the basis of a short audio clip.

But clearly, some of these things are valid, right? We can judge the size of someone or even of an animal based on how they sound, right? A chihuahua can’t make the sound of a lion. A lion can’t make the sound of a chihuahua. And that’s always been true because we’re producing audio signals that go right into each other’s ears. And now, of course, everything that Erik is talking about, that’s not true. It goes through all of these different layers of technology increasingly fueled by AI. But when that technology works the best way, it’s as if it isn’t there at all and we’re just hearing each other directly.

Erik: That’s the goal, right? That it’s seamless open communication and we don’t have to think about the technology anymore.

Brian: It’s a tough business to be in, I think, though, Erik, because people have to know what’s going on behind the surface in order to value it. Otherwise, we just expect it to work.

Erik: Well, that’s why we try to put the logo of our products on the side of them so they show up in the videos. But yeah, it’s a good point.

Brian: Very good. Very good.

Erik: Yeah.

Megan: And we’ve talked about virtual meetings and conversations quite a bit, but there’s also streamed and recorded content, which are increasingly important at work as well. I wondered, Erik, if you could talk a bit about how businesses are leveraging audio in new ways for things like marketing campaigns and internal upskilling and training and areas like that?

Erik: Yeah. Well, one of the things I think we’ve all seen in marketing is that not everything is a high production value commercial anymore. And there’s still a place for that, for sure. But people tend to trust influencers that they follow. People search on TikTok, on YouTube for topics. Those can be the place that they start. And as technology’s gotten more accessible, not just audio, but of course, the video technology too, content creators can produce satisfying content on their own or with just a couple of people with them. And Brian’s study shows that it doesn’t really matter what the origins of the content are for it to be compelling.

For the person delivering the message to be compelling, the audio quality does have to hit a certain level. But because the tools are simpler to use and you need fewer things to connect and pull together a decent production system, creator-driven content is becoming even more and more integral to a marketing campaign. And so not just what they maybe post on their Instagram page or post on LinkedIn, for example, but us as a brand being able to take that content and use that actually in paid media and things like that is all entirely possible because of the overall quality of the content. So that’s something that’s been a trend that’s been in process really, I would say, maybe since the advent of podcasts. But it’s been an evolution. And it’s come a long, long way.

Another thing, and this is really interesting, and this hits home personally, but I remember when I first entered the workforce, and I hope I’m not showing my age too badly here, but I remember the word processing department. And you would write down on a piece of paper, like a memo, and you would give it to the word processing department and somebody would type it up for you. That was a thing. And these days, we’re seeing actually more and more video production with audio, of course, transfer to the actual producers of the content.

In my company, at Shure, we make videos for different purposes to talk about different initiatives or product launches or things that we’re doing just for internal use. And right now, everybody, including our CEO, she makes these videos just at her own desk. She has a little software tool and she can show a PowerPoint and herself and speak to things. And with very, very limited amount of editing, you can put that out there. And I’ve seen friends and colleagues at other companies in very high-level roles just kind of doing their own production. Being able to buy a very high quality microphone with really advanced signal processing built right in, but just plug it in via USB and have it be handled as simply as any consumer device, has made it possible to do really very useful production where you are going to actually sound good and get your message across, but without having to make such a big production out of it, which is kind of cool.

Megan: Yeah. Really democratizes access to sort of creating high quality content, doesn’t it? And of course, no technology discussion is complete without a mention of return on investment, particularly nowadays. Erik, what are some ways companies can get returns on their audio tech investments as well? Where are the most common places you see cost savings?

Erik: Yeah. Well, we collaborated on a study with IDC Research. And they came up with some really interesting findings on this. And one of them was, no surprise, two-thirds or more of companies have taken action on improving their communication and collaboration technology, and even more have additional or initial investments still planned. But the ROI of those initiatives isn’t really tied to the initiative itself. It’s not like when you come out with a new product, you look at how that product performs, and that’s the driver of your ROI. The benefits of smoother collaboration come in the form of shorter meetings, more productive meetings, better decision-making, faster decision-making, stronger teamwork. And so to build an ROI model, what IDC concluded was that you have to build your model to account for those advantages really across the enterprise or across your university, or whatever it may be, and kind of up and down the different set of activities where they’re actually going to be utilized.

So that can be complex. Quantifying things can always be a challenge. But like I said, companies do seem to understand this. And I think that’s because, this is just my hunch, but because everybody, including the CEO and the CFO and the whole finance department, uses and benefits from collaboration technology too. Perhaps that’s one reason why the value is easier to convey. Even if they have not taken the time to articulate things like we’re doing here today, they know when a meeting is good and when it’s not good. And maybe that’s one of the things that’s helping companies to justify these investments. But it’s always tricky to do ROI on projects like that. But again, focusing on the broader benefits of collaboration and breaking it down into what it means for specific activities and types of meetings, I think, is the way to go about doing that.

Megan: Absolutely. And Brian, what kinds of advancements are you seeing in the lab that perhaps one day might contribute to those cost savings?

Brian: Well, I don’t know anything about cost savings, Megan. I’m a college professor. I live a pure life of the mind.

Megan: Of course.

Brian: ROI does not compute for me. No, I would say we are in an extremely exciting frontier right now because of AI and many different technologies. The studies that we talked about earlier, in one sense, they were broad. We explored many different traits from dating to hiring to credibility. And we isolated them in all sorts of ways we didn’t talk about. We showed that it wasn’t due to overall affect or pessimism or something like that. But in those studies, we really only tested one very particular set of dimensions along which an audio signal can vary, which is some sort of model of clarity. But in reality, the audio signal is so multi-dimensional. And as we’re getting more and more tools these days, we can not only change audio along the lines of clarity, as we’ve been talking about, but we can potentially manipulate it in all sorts of ways.

We’re very interested in pushing these studies forward and in exploring how people’s sort of brute impressions that they make are affected by all sorts of things. Megan and Erik, we walk around the world all the time making these judgments about people, right? You meet someone and you’re like, “Wow, I could really be friends with them. They seem like a great person.” And you know that you’re making that judgment, but you have no idea why, right? It just seems kind of intuitive. Well, in an audio signal, when you’re talking to someone, you can think of, “What if their signal is more bass heavy? What if it’s a little more treble heavy? What if we manipulate it in this way? In that way?”

When we talked about the faculty meeting that motivated this whole research program, I mentioned that my colleague, who was speaking from his home recording studio, he actually didn’t sound clear like in real life. He sounded better than in real life. He sounded like he was all around us. What is the implication of that? I think there’s so many different dimensions of an audio signal that we’re just being able to readily control and manipulate that it’s going to be very exciting to see how all of these sorts of things impact our impressions of each other.

Megan: And there may be some overlap with this as well, but I wondered if we could close with a future forward look, Brian. What are you looking forward to in emerging audio technology? What are some exciting opportunities on the horizon, perhaps related to what you were just talking about there?

Brian: Well, we’re interested in studying this from a scientific perspective. Erik, you talked about when you started. When I started doing this science, we didn’t have a word processing department. We had a stone tablet department. But I hear tell that the current generation, when they send photos back and forth to each other, that they, as a matter of course, apply all sorts of filters-

Erik: Oh, yes.

Brian: … to those video signals, those video or just photographic signals. We’re all familiar with that. That hasn’t quite happened with the audio signals yet, but I think that’s coming up as well. You can imagine that you record yourself saying a little message and then you filter it this way or that way. And that’s going to become the Wild West about the kinds of impressions we make on each other, especially if and when you don’t know that those filters have been operating in the first place.

Megan: That’s so interesting. Erik, what are you looking forward to in audio technology as well?

Erik: Well, I’m still thinking about what Brian said.

Megan: Yeah. That’s-

Erik: That’s very interesting.

Megan: It’s terrifying.

Erik: I have to go back again. I’ll go back to the past, maybe 15 to 20 years. And I remember at work, we had meeting rooms with the Starfish phones in the middle of the table. And I remember that we would have international meetings with our partners there that were selling our products in different countries, including in Japan and in China, and the people actually in our own company in those countries. We knew the time zone was bad. And we knew that English wasn’t their native language, and tried to be as courteous as possible with written materials and things like that. But I went over to China, and I had to actually be on the other end of one of those calls. And I’m a native English speaker, or at least a native Chicago dialect of American English speaker. And really understanding how challenging it was for them to participate in those meetings just hit me right between the eyes.

We’ve come so far, which is wonderful. But I think of a scenario, and this is not far off, there are many companies working on this right now, where not only can you get real-time captioning in your native language, no matter what language the participant speaks, you can actually hear the speaker’s voice rendered in your native language.

I’m never going to be a fluent Japanese or Chinese speaker, that’s for sure. But I love the thought that I could actually talk with people and they could understand me as though I were speaking their native language, and that they could communicate to me and I could understand them in the way that they want to be understood. I think there’s a future out there where this technology can really be something that helps bring people together. Now that we have so many years of history with the internet, we know there’s usually two sides to the coin of technology, but there’s definitely going to be a positive side to this, and I’m really looking forward to it.

Megan: Gosh, that sounds absolutely fascinating. Thank you both so much for such an interesting discussion.

That was Erik Vaveris, the VP of product management and chief marketing officer at Shure, and Brian Scholl, director of the Perception & Cognition Laboratory at Yale University, whom I spoke with from Brighton in England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. And this episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.