AI Assistants Show Significant Issues In 45% Of News Answers via @sejournal, @MattGSouthern

Leading AI assistants misrepresented or mishandled news content in nearly half of evaluated answers, according to a European Broadcasting Union (EBU) and BBC study.

The research assessed the free/consumer versions of ChatGPT, Copilot, Gemini, and Perplexity answering news questions in 14 languages, with 22 public-service media organizations in 18 countries taking part.

The EBU said in announcing the findings:

“AI’s systemic distortion of news is consistent across languages and territories.”

What The Study Found

In total, 2,709 core responses were evaluated, with qualitative examples also drawn from custom questions.

Overall, 45% of responses contained at least one significant issue, and 81% had some issue. Sourcing was the most common problem area, affecting 31% of responses at a significant level.

How Each Assistant Performed

Performance varied by platform. Google Gemini showed the most issues: 76% of its responses contained significant problems, driven by 72% with sourcing issues.

The other assistants were at or below 37% for major issues overall and below 25% for sourcing issues.

Examples Of Errors

Accuracy problems included outdated or incorrect information.

For instance, several assistants identified Pope Francis as the current Pope in late May, despite his death in April, and Gemini incorrectly characterized changes to laws on disposable vapes.

Methodology Notes

Participants generated responses between May 24 and June 10, using a shared set of 30 core questions plus optional local questions.

The study focused on the free/consumer versions of each assistant to reflect typical usage.

Many organizations had technical blocks that normally restrict assistant access to their content. Those blocks were removed for the response-generation period and reinstated afterward.

Why This Matters

If you use AI assistants for research or content planning, these findings reinforce the need to verify claims against original sources.

For publishers, the findings affect how your content may be represented in AI answers. The high rate of errors increases the risk of misattributed or unsupported statements appearing in summaries that cite your content.

Looking Ahead

The EBU and BBC published a News Integrity in AI Assistants Toolkit alongside the report, offering guidance for technology companies, media organizations, and researchers.

Reuters reports the EBU’s view that growing reliance on assistants for news could undermine public trust.

As EBU Media Director Jean Philip De Tender put it:

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”


Featured Image: Naumova Marina/Shutterstock

YouTube Expands Likeness Detection To All Monetized Channels via @sejournal, @MattGSouthern

YouTube is beginning to expand access to its likeness detection tool to all channels in the YouTube Partner Program over the next few months.

The technology helps you identify unauthorized videos where your facial likeness has been altered or generated with AI.

YouTube announced the expansion after testing the tool with a small group of creators.

The tool addresses a growing concern as AI-generated content becomes more sophisticated and accessible.

How Likeness Detection Works

Channels can access the tool through YouTube Studio’s content detection tab under a new likeness section.

The onboarding process requires identity verification. You scan a QR code with your phone’s camera, then submit a photo ID and record a brief selfie video performing specific motions.

YouTube processes this information on Google servers, typically granting access within a few days.

Once verified, creators see a dashboard displaying videos that match their facial likeness. The interface shows video titles, upload dates, upload channels, view counts, and subscriber numbers. YouTube’s systems flag some matches as higher priority for review.

Taking Action On Detected Content

You have three options when reviewing matches.

You can request removal under YouTube’s privacy guidelines, submit a copyright claim, or archive the video without action. The tool automatically fills legal name and email information when starting a removal request.

Privacy removal requests apply to altered or synthetic content that violates specific criteria. YouTube’s announcement highlighted two examples: AI-generated videos showing creators endorsing political candidates, and infomercials with creators’ faces added through AI.

Copyright claims follow different rules and must consider fair use exceptions. Videos using short clips from a creator’s channel may not qualify for privacy removal but could warrant copyright action.


Policy Differences

YouTube stressed the distinction between privacy and copyright policies.

Privacy policy violations involve altered or synthetic content judged against criteria including whether the content is parody, satire, or includes AI disclosure. Copyright infringement covers unauthorized use of original content, including cropped videos to avoid detection or videos with changed audio.

The tool surfaces some short clips from creators’ own channels. These don’t qualify for privacy removal but may be eligible for copyright claims if fair use doesn’t apply.

Why This Matters

This gives YouTube Partner Program creators direct control over how AI-generated content uses their likeness.

Monetized channels can now monitor unauthorized deepfakes and request removal when videos mislead the audience about endorsements or statements that were never made.

Looking Ahead

The tool will roll out to eligible creators over the next few months. Those who see no matches shouldn’t be concerned. YouTube says this indicates no detected unauthorized use of their likeness on the platform.

Channels can withdraw consent and stop using the tool at any time through the manage likeness detection settings.

PPC Trends 2026: AI, Automation, And The Fight For Visibility via @sejournal, @MattGSouthern

If you manage PPC campaigns, you’ve seen it. Platforms are making more decisions without asking you first.

Campaign types keep consolidating into AI-first formats like Performance Max and Demand Gen. The granular controls you used to rely on keep disappearing or moving behind automation.

A year ago, Performance Max still felt experimental. Now it’s often the default option, with AI generating ad copy, and automation selecting audiences based on signals you can’t always see. When performance drops, you have fewer levers to pull and less visibility into what’s actually happening.

It can be disorienting to some, and the trend isn’t reversing.

We asked PPC professionals how they’re navigating this shift. Most aren’t pessimistic about AI-first campaigns. Many have found ways to work with platform automation without surrendering the strategic thinking that drives results.

You can use AI tools without losing your expertise in the process.

4 Key Findings From Industry Professionals

We surveyed professionals from agency, platform, and consultancy backgrounds for this year’s report. Clear patterns emerged in how they’re adapting to AI-first campaign management.

1. AI Tools Save Time But Still Need Babysitting

Most professionals now use AI daily for tasks like keyword research and ad copy variations. The tools are good enough to integrate into workflows.

But there’s a catch. Over half identify “inaccurate, unreliable, or inconsistent output quality” as the biggest limitation. AI accelerates production, but it hasn’t replaced the need for human oversight.

One contributor noted that in regulated industries where legal review is required, AI outputs often can’t be used without heavy editing.

The professionals who get results are the ones treating AI as an assistant, not a replacement.

2. “Control” Means Something Different Now

You can’t control exact search terms the way you used to. You can’t set precise bids on individual keywords or force campaigns to follow rigid parameters.

Several contributors argue you still have meaningful control; it just operates differently than before. One Google Ads coach compared it to giving a teenager the destination address and trusting they can navigate there, even if they take a few wrong turns along the way.

The new version of control means setting clear business objectives and providing high-quality conversion data. If your conversion tracking is messy or incomplete, AI will optimize toward the wrong goals.

3. Measurement Got More Honest (And More Uncomfortable)

Cookie deprecation was canceled in Chrome, but measurement challenges haven’t disappeared. What’s changed is how practitioners talk about attribution.

One agency founder admitted that focusing too heavily on perfect attribution might have been a strategic mistake. “Your marketing strategy should hold up even if granular tracking disappears.”

Other contributors emphasize that first-party data collection with proper consent is now essential for survival, especially in lead generation models.

Revenue remains the most reliable source of truth when platform-reported metrics conflict.

The most durable measurement approach involves choosing a limited set of reliable lenses rather than attempting to reconcile data from every available source.

4. Platform-Generated Creative Performs Better Than You’d Think

This finding surprises people. Several contributors report that AI-generated creative assets can perform competitively with human-created versions when they’re prompted effectively.

But “when prompted effectively” is doing substantial work in that sentence.

Quality depends heavily on how well you prompt the tools and how much brand context you provide. The tools still struggle with maintaining consistent brand voice and meeting legal compliance requirements in regulated industries.

Visual generation continues to need improvement, though contributors note it’s getting better for ecommerce product photography.

Most teams have settled on a hybrid workflow where AI handles idea generation and creates variations while humans manage final approval and anything requiring nuanced brand voice.

What Makes This Report Different

Previous years focused on specific platform changes or new features. This year’s questions dig into strategy.

How do you maintain visibility when platforms reduce transparency? What measurement techniques still work when attribution is murky? How do you adapt creative workflows when AI can generate assets on demand?

The contributors include:

  • Brooke Osmundson, Director of Growth Marketing, Smith Micro Software.
  • Gil Gildner, Agency Co-Founder, Discosloth.
  • Navah Hopkins, Product Liaison, Microsoft.
  • Jonathan Kagan, Director of Search & Media Strategy, Amsive.
  • Mike Ryan, Head of Ecommerce Insights, Smarter Ecommerce.
  • Jyll Saskin Gales, Google Ads Coach, Inside Google Ads.

The answers reflect an industry adapting in real time. Some contributors have embraced AI-first workflows fully, while others remain cautious about surrendering too much control. All are experimenting constantly because the platforms aren’t slowing down.

Why Download This Now

If you’re managing campaigns, you’re already wrestling with these challenges. Are you approaching them with a clear strategy, or just reacting to each platform change as it happens?

This report will show you how experienced professionals at agencies, platforms, and consultancies are thinking through the same problems you’re facing right now.

Download PPC Trends 2026 to see how industry professionals are adapting their strategies, maintaining accountability in automated campaigns, and finding ways to make AI-first advertising work without losing the strategic expertise that separates successful campaigns from mediocre ones.



Featured Image: Paulo Bobita/Search Engine Journal

How And Why Google Rewrites Your Hard-Earned Headlines

TL;DR

  1. Google can and does rewrite headlines and titles frequently. Almost anything on your page could be used.
  2. The title is not all that matters. The entirety of your page – from the title to the on-page content – should remove ambiguity.
  3. The title tag is the most important headline on the page. Stick to 12 words and 600 pixels to avoid truncation and maximize value from each word.
  4. Google uses three rough concepts to decide when to rewrite – semantic title and content alignment, satisfactory click behavior, and searcher intent alignment.
Image Credit: Harry Clarkson-Bennett

This is based on the Google leak documentation and Shaun Anderson’s excellent article on title tag rewriting. I’ve jazzed it up to make it more news- and publisher-specific.

“On average, five times as many people read the headline as read the body copy.”
David Ogilvy

No idea if that’s true or not.

I’m sure it’s some age-old advertising BS. But alongside the featured image, the headline is our shop window. Headlines are the gatekeepers. They need to be clickable, work for humans and machines, and prioritize clarity.

So, when you’ve spent a long time crafting a headline for your own story, why-oh-why does Google mess you around?

I’m sure you get a ton of questions from other people in the SEO team and the wider newsroom (or the legal team) about this.

Something like:

Why is our on-page headline being pulled into the SERP?

Or

We can just have the same on-page headline and title tag, can’t we? Why does it matter?

You could rinse and repeat this conversation and theory for almost anything. Meta descriptions are the most obvious case; some research shows they’re rewritten 70% of the time. The answer will, unfortunately, always be the same: because Google can, and it does what it wants.

But it helps to know the what and the why when having these conversations.

Mark Williams-Cook and team did some research to show that up to 80% of meta descriptions were being rewritten and the rewriting increased traffic. Maybe the machine knows best after all.

Why Does Google Rewrite Title Tags?

The search giant uses document understanding, query matching, content rewriting, and user engagement data to determine when a title or H1 should be changed in SERPs.

It rewrites them because it knows what best satisfies users in real time, an area of search where we as publishers are at the bleeding edge. When you have access to that much data and you take a share of ad revenue, it would be a little obtuse not to optimize for clicks in real time.

Image Credit: Harry Clarkson-Bennett

Does Length Matter?

No innuendos, please; this is a professional newsletter.

Google’s official documentation doesn’t define a limit for title tags. I think it’s just based on the title becoming truncated. Given Google now rewrites so much, longer, more keyword-rich and descriptive titles could help with ranking in Top Stories and traditional search results.

According to Gary Illyes, there is real value in having longer title tags:

“The title tag (length), is an externally made-up metric. Technically there’s a limit, but it’s not a small number…

Try to keep it precise to the page, but I wouldn’t think about whether it’s long enough…”

Sara Taher ran some interesting analysis (albeit on evergreen content only) that showed the average title length falls between 42 and 46 characters. If titles are too long, Google will probably cut them off or rewrite them. Precision matters for evergreen search.
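If you want a rough pre-publish check, a small script can flag titles likely to be truncated. Here’s a minimal Python sketch; the ~600-pixel budget is the commonly cited desktop threshold mentioned above, but the per-character pixel widths are rough assumptions of mine, not Google-published values.

```python
# Rough title-truncation check: flags titles likely to blow past the
# ~600px desktop SERP budget or the 12-word guideline discussed above.
# Per-character pixel widths are rough assumptions, not Google values.

APPROX_CHAR_PX = {"i": 5, "l": 5, "j": 5, "t": 7, "f": 7, "r": 7, " ": 5,
                  "m": 15, "w": 14, "M": 16, "W": 17}
DEFAULT_PX = 10          # assumed average width for any other character
PIXEL_BUDGET = 600       # assumed desktop truncation threshold
WORD_BUDGET = 12

def estimate_pixels(title: str) -> int:
    """Very rough pixel-width estimate for a SERP title."""
    return sum(APPROX_CHAR_PX.get(ch, DEFAULT_PX) for ch in title)

def check_title(title: str) -> dict:
    words = len(title.split())
    pixels = estimate_pixels(title)
    return {
        "title": title,
        "words": words,
        "approx_pixels": pixels,
        "likely_truncated": pixels > PIXEL_BUDGET or words > WORD_BUDGET,
    }

if __name__ == "__main__":
    for sample in [
        "Budget 2025: What The Chancellor's Statement Means For Your Mortgage Repayments This Year",
        "Election Results Live",
    ]:
        print(check_title(sample))
```

Treat the output as a nudge to trim, not a guarantee; the only real test is how the title actually renders in the SERP.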

What Are The Key Determinants?

Based on the Google leak and Shaun’s analysis, I’d say there are three concepts at play that Google uses to determine whether a title should be rewritten. I have made this framing up, by the way, so feel free to use your own.

  • Semantic title and content alignment.
  • Satisfactory click behavior.
  • Searcher intent alignment.

Semantic Title And Content Alignment

This is undoubtedly the most prominent of the three. Your on-page content and title/headline have to align.

This is why clickbait content and content written directly for Google Discover is so risky. Because you’re writing a cheque that you can’t cash. Create content specifically for a platform like Discover, and you will erode your quality signals over time.

Image Credit: Harry Clarkson-Bennett

The titlematchScore, h1ContentScore, and spammyTitleDetection attributes review the base quality of a headline based on the page’s content and query intent. Mismatched titles, headlines, and keyword-stuffed versions are, at best, rewritten.

At worst, they downgrade the quality of your site algorithmically.

The titleMatchAnchorText attribute compares our title tags and header(s) to internal and external anchors and evaluates them against the hierarchy of the page (the headingHierarchyScore).

Finally, the “best” title is chosen from on-page elements via the snippetTitleExtraction. While Google primarily uses the title or H1 tag, any visible element can be used if it “best represents the page.”
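To make the idea of title-and-content alignment concrete, here’s a toy Python sketch that scores a title by how many of its meaningful tokens also appear in the H1 and body copy. It is purely illustrative: this is not how titlematchScore or h1ContentScore are calculated, and the function names and stopword list are my own.

```python
# Toy title/content alignment score: the share of meaningful title tokens
# that also appear in the H1 and body copy. Purely illustrative; not
# Google's titlematchScore or h1ContentScore.

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "for", "on", "is"}

def tokens(text: str) -> set:
    return {t.strip(".,:;!?'\"").lower() for t in text.split()} - STOPWORDS - {""}

def alignment_score(title: str, h1: str, body: str) -> float:
    """Fraction of title tokens found in the H1 or body (0.0 to 1.0)."""
    title_tokens = tokens(title)
    page_tokens = tokens(h1) | tokens(body)
    return len(title_tokens & page_tokens) / len(title_tokens) if title_tokens else 0.0

# A title that mirrors the H1 and intro scores high; a clickbait or
# keyword-stuffed title the page can't back up scores much lower.
print(alignment_score(
    title="Chancellor Confirms Stamp Duty Cut In Autumn Budget",
    h1="Autumn Budget: Stamp duty cut confirmed by the Chancellor",
    body="The Chancellor confirmed a cut to stamp duty in today's Budget statement...",
))
```

A real system would lemmatize, weight entities, and look at anchors too, but the principle is the same: the more your title is supported by the page, the less reason Google has to rewrite it.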

Satisfactory Click Behavior

Much more straightforward. This is how Google uses user engagement signals (think Navboost’s good vs. bad click signals) to cultivate the best SERP for a particular term and cohort of people.

Image Credit: Harry Clarkson-Bennett

The titleClickSatisfaction metric combines click data at a query level with on-page engagement data (think scroll depth, time on page, on-page interactions, pogo-sticking).

Ranking adjustments are made if Google believes the title used in the SERP is underperforming against your prior performance and the competition. So the title you see could be one of many tests happening simultaneously, I suspect.

For those unfamiliar with Navboost, it is one of Google’s primary ranking engines. It’s based on user interaction signals, like clicks, hovers, scrolls, and swipes, over 13 months to refine rankings.

For news publishers, Glue helps rank content in real time for fresh events, alongside source- and page-level authority. It’s a fundamental part of how news SEO really works.

Searcher Intent Alignment

Searcher intent really matters when it comes to page titles. And Google knows this far better than we do. So, if your page title doesn’t reflect the content on your page (headings, paragraphs, images, et al.) and the search intent, it’s gone.

Image Credit: Harry Clarkson-Bennett

Once a page title has been identified as not fit for purpose, the pageTitleRewriter metric is designed to rewrite “unhelpful or misleading page titles.”

And page titles are rewritten at a query level. The queryIntentTitleAlignment measures how the page title aligns with searcher intent. Once this is established, the page alignment and query intent are reviewed to ensure the title best reflects the page at a query level.

Then the queryDependentTitleSelection adjusts the title based on the specifics of the search and the searcher, primarily at the query and location level. The best contextual match is picked.

Suggestions For Publishers

Here’s what I’d suggest (in a vague order of precedence):

  1. Make your title stand out. Be clickable. Front-load entities. Use power words, numbers, or punctuation where applicable.
  2. Stick to 12 words and 600 pixels to avoid truncation and maximize value from each word.
  3. Your title tag should represent the content on your page effectively for people and machines.
  4. Avoid keyword stuffing. Entities in headlines = good. Search revolves around entities. People, places, and organizations are the bedrock of search and news in particular. Just don’t overdo it.
  5. Do not lean too heavily into clickbait headlines. There’s a temptation to do more for Discover at the minute. The headlines on that platform tend to sail a little too close to the clickbait wind.
  6. Make sure your title best reflects the user intent and keep things simple. The benefit of search is that people are directly looking for an answer. Titles don’t always have to be wildly clicky, especially with evergreen content. Simple, direct language helps pass titleLanguageClarity checks and reduces truncation.
  7. Utilize secondary (H2s) and tertiary (H3s) headings on your page. This has multiple benefits. A well broken-up page encourages quality user engagement. It increases the chances of your article ranking for longer-tail queries. And, it helps provide the relevant context to your page for Google.
  8. Monitor CTR and run headline testing on-site. If you have the capacity to run headline testing in real-time, fantastic. If not, I suggest taking headline and CTR data at scale and building a model that helps you understand what makes a headline clickable at a subfolder or topic level. Do emotional, first-person headlines with a front-loaded entity perform best in /politics, for example? (A starting-point sketch follows this list.)
  9. Control your internal anchor text. Particularly important for evergreen content. But even with news, there are five headlines to pay attention to. And internal links (and their anchors) are a pivotal one. The matching anchor text reinforces trust in the topic.
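If you don’t have an on-site testing tool, a basic aggregation is a reasonable starting point for point 8. The Python sketch below assumes a page-level performance export (a Search Console download, for example) saved as a CSV with page, clicks, and impressions columns; the column names and file path are assumptions you’d adjust to match your own export.

```python
# Aggregate click-through rate by subfolder from a page-level CSV export.
# Assumes columns named "page", "clicks", and "impressions"; adjust these
# to match your actual export (e.g., a Search Console performance download).

import csv
from collections import defaultdict
from urllib.parse import urlparse

def subfolder(url: str) -> str:
    """First path segment of a URL, e.g. /politics/ for /politics/some-story."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    return f"/{parts[0]}/" if parts else "/"

def ctr_by_subfolder(path: str = "pages.csv") -> dict:
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            folder = subfolder(row["page"])
            clicks[folder] += int(row["clicks"])
            impressions[folder] += int(row["impressions"])
    return {folder: clicks[folder] / impressions[folder]
            for folder in impressions if impressions[folder]}

if __name__ == "__main__":
    for folder, ctr in sorted(ctr_by_subfolder().items(), key=lambda kv: -kv[1]):
        print(f"{folder:<20} {ctr:.2%}")
```

From there, join the headline text back onto each page and look for patterns within each subfolder: front-loaded entities, numbers, first-person framing, and so on.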

If you are looking into developing your Discover profile, I would recommend testing the OG title if you want to test “clickier” headlines that aren’t visible on page.

Final Thoughts

So, the goal isn’t just to have a well-crafted headline. The goal is to have a brilliant set of titles – clickable, entity and keyword rich, highly relevant. As Shaun says, it’s to create a constellation of signals – the title tag, the H1, the URL, the intro paragraph – that remove all ambiguity.

As ever, clicks are an immensely powerful signal. Google has more data points than I’ve had hot dinners, so it has a pretty good idea of what will do well. But real clicks can override this. The goldmineNavboostFactor is proof that click behavior influences which title is displayed.

The title tag is the most important headline on the page when it comes to search. More so than the H1. But they have to work together to draw people in and engage them instantly.

But it all matters. Removing ambiguity is always a good thing. Particularly in a world of AI slop.

More Resources: 


This post was originally published on Leadership In SEO.


Featured Image: Billion Photos/Shutterstock

SEO Is Not A Tactic. It’s Infrastructure For Growth via @sejournal, @billhunt

In the age of AI, many companies still treat SEO as a bolt-on tactic, something to patch in after the website is designed, the content is written, and the campaigns are launched. As I explored in “Why Your SEO Isn’t Working – And It’s Not the Team’s Fault,” the real obstacles aren’t a lack of knowledge or talent. They’re embedded in how companies structure ownership, prioritize resources, and treat SEO as a tactic. But SEO isn’t a tactic. It’s infrastructure. And unless it’s treated as such, most organizations will never realize their full growth potential.

Search is no longer about reacting to keywords; it’s about structuring your entire digital presence to be discoverable, interpretable, and aligned with the customer journey. When done right, SEO becomes the connective tissue across content, product, and performance marketing.

Effectively Engage Intent-Driven Prospects

As I first argued in my 1994 business school thesis, and still believe today, search is the best opportunity companies have to engage “interest-driven” prospects. These are people actively declaring their needs, preferences, and intentions via a search interface. All we have to do is listen and nurture them in their journey.

When organizations structure content and infrastructure to meet that demand, they not only reduce friction – they unlock scalable demand capture.

Search:

  • Works across the funnel: awareness, consideration, conversion.
  • Reduces customer acquisition cost (CAC) by meeting customers on their terms.
  • Surfaces unmet demand signals that never show up in customer relationship management (CRM).
  • Reveals how people describe, evaluate, and compare products.
  • Can be a cost-effective tactic for removing friction by matching sales and marketing content precisely with the needs of the person seeking it.

In short, SEO gives you real-time visibility into what people want and how to serve them better. But only if the business treats it as a growth engine – not a last-minute add-on.

Case In Point: Search Left Out Of The Business

In one engagement, we analyzed 2.8 million keywords for a large enterprise with a $50 million PPC budget. The goal? Understand how well they were showing up across the full buying journey. This was a significant data and mathematical problem. For each product or service, we identified the buyer’s journey from awareness to support. We then created a series of rules to develop and classify queries representing searchers in each phase.

We could easily see the query chains of users from their first discovery query all the way through the buy cycle until they were looking for support information. It wasn’t perfect, but it did capture over 100 patterns of content types sought in different phases. By monitoring these pages and user paths, we were better able to satisfy their information needs and convert them into customers.
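The exact rules from that engagement were bespoke, but the mechanics are simple keyword matching. Below is a simplified, hypothetical Python version of that kind of rule-based classification; the phase names and trigger phrases are illustrative placeholders, not the real ruleset.

```python
# Simplified, hypothetical buyer-journey classifier. Each phase has trigger
# phrases; a query is tagged with the first phase whose rules match.
# Phase names and phrases are illustrative, not the actual ruleset.

PHASE_RULES = [
    ("support",        ["error", "manual", "reset", "troubleshoot", "warranty"]),
    ("implementation", ["power requirements", "btu", "spec sheet", "wiring diagram"]),
    ("purchase",       ["price", "quote", "buy", "discount"]),
    ("consideration",  ["best", "review", "comparison", "vs", "alternatives"]),
    ("awareness",      ["what is", "how does", "benefits of", "guide"]),
]

def classify_query(query: str) -> str:
    q = query.lower()
    for phase, phrases in PHASE_RULES:
        if any(phrase in q for phrase in phrases):
            return phase
    return "unclassified"

for q in [
    "what is liquid cooling for data centers",
    "server rack btu requirements",
    "acme x200 vs contoso z5 review",
]:
    print(f"{classify_query(q):<15} {q}")
```

At enterprise scale, the same idea runs across millions of queries, with the unclassified bucket reviewed by hand to refine the rules.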

We checked organic rank: If the page wasn’t in the top five and there was no paid ad, we counted it as having no exposure. Once we had the full picture, we saw the dysfunction clearly:

  • In the critical early non-branded discovery phase, we had no presence for nearly 400 million queries related to technologies the company sold.
  • Even more shocking, we missed 93% of 130 million queries tied to implementation-specific searches – like power specs, BTU requirements, or images for engineering diagrams.

The content existed, but it was buried in PDFs or trapped in crawl-unfriendly support sections. These were highly motivated searchers building proposals or writing budget justifications. We were making it hard for them to find what they needed.

To build our business case for change, we took all of these queries and layered in marketing qualified lead (MQL) and sales qualified lead (SQL) metrics to quantify the potential missed opportunity. Using conservative assumptions to avoid executive panic, we demonstrated that this gap represented over $580 million in unrealized revenue.
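The business case itself is straightforward funnel math. The sketch below shows the shape of that calculation using the query volumes from the bullets above; the click-through, MQL, SQL, and close rates and the deal value are placeholders of mine, not the figures from the engagement.

```python
# Shape of the missed-opportunity estimate: unexposed query volume flows
# through conservative CTR, MQL, SQL, and close rates to a revenue figure.
# All rates and the deal value are placeholders, not client figures.

def missed_revenue(unexposed_queries: float,
                   ctr: float = 0.02,           # expected CTR if we showed up
                   visit_to_mql: float = 0.03,  # visits that become MQLs
                   mql_to_sql: float = 0.30,    # MQLs accepted as SQLs
                   sql_to_close: float = 0.20,  # SQLs that close
                   avg_deal_value: float = 25_000) -> float:
    visits = unexposed_queries * ctr
    deals = visits * visit_to_mql * mql_to_sql * sql_to_close
    return deals * avg_deal_value

# Query volumes echo the bullets above; everything else is a placeholder.
print(f"Discovery gap:      ${missed_revenue(400_000_000):,.0f}")
print(f"Implementation gap: ${missed_revenue(0.93 * 130_000_000):,.0f}")
```

Swap in your own funnel rates and deal values; the point is that even conservative assumptions turn invisible queries into a number the C-suite can act on.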

This wasn’t a content gap – it was a mindset and infrastructure failure. Search wasn’t seen as a system. It wasn’t connected to growth.

SEO As Strategic Growth Infrastructure

Organic search had been siloed into a tactical role, and paid search was framed as an acquisition driver, both disconnected from each other and from how the business grows. The result? A website optimized for internal org charts, not for how customers think, search, and decide. This is where the true value of SEO as infrastructure comes into focus. It’s not just about saving money on media; it’s about building systems that align with the full buyer journey.

When SEO is embedded into product planning, content creation, and experience design, you don’t just show up more often. You present the right content at the right time to advance the user to the next step, whether that’s deeper research, a sales inquiry, or successful onboarding. This isn’t about creating more content. It’s about orchestrating a connected, intent-responsive experience that nurtures buyers across every phase of the journey.

That’s the shift from SEO-as-tactic to SEO-as-infrastructure. When treated as infrastructure, SEO provides a high-leverage system that reveals market opportunities, drives persistent visibility, and reduces acquisition costs over time.

Done right, SEO delivers:

  • Scalable, evergreen visibility across product lines and geographies.
  • Lower marginal acquisition costs as rankings compound.
  • Faster adaptation to evolving user needs and market trends.
  • Systemic alignment between product, content, and experience.

Just like investing in cloud infrastructure enables engineering agility, investing in SEO infrastructure enables commercial agility, giving product, marketing, and sales teams the insight and systems to execute faster and smarter.

I believe AI search results will act as a system-wide health check: they reveal messaging gaps, content blind spots, unclear product positioning, and even operational issues that frustrate customers. It’s the clearest signal you’ll ever get about what customers want and whether you’re delivering.

And as digital maturity rises, functions once seen as tactical, like SEO, are now key contributors to:

  • Operational leverage.
  • Customer acquisition.
  • Digital product-market fit.
  • Margin protection at scale.

Technical infrastructure is a key enabler of this shift. Sites that embed SEO principles into their content management system (CMS), development workflows, and indexing architecture aren’t just faster; they’re more findable, interpretable, and durable in an AI-shaped ecosystem. It’s the technical foundation that powers business visibility.

SEO is no longer just about rankings. It’s:

  • A lens into unmet customer demand.
  • A framework for reducing acquisition costs.
  • A lever for improving digital experiences.
  • A driver of compounding traffic and long-term growth.

This mirrors the broader theme in “Closing the Digital Performance Gap” – where we argue that digital systems like SEO must be treated as capital investments, not just marketing tactics. When commissioned correctly, SEO becomes an accelerant, not a dependency. Without that mindset shift at the executive level, web performance remains fragmented.

But Isn’t SEO Dead? Let’s Clear That Up

Yes, zero-click results are rising, especially for simple facts and generic queries. But that’s not where business growth happens. Most high-value customer journeys, especially in B2B, enterprise, or considered purchases, don’t end with a snippet. They involve exploration, comparison, and validation. They require depth. They demand trust. And they often result in a click. This is even more critical with AI search providing richer information.

The users who do click after scanning AI results are often more intent-driven, more informed, and further along in the buying process. That makes it more critical – not less – to ensure your site is structured to show up, be interpreted correctly, and deliver value when it matters most. SEO isn’t dead. Lazy SEO is. The fundamentals haven’t changed: Show up when it matters, deliver what people need, and reduce friction at every touchpoint. That’s not going away – no matter how AI evolves.

Final Thought

In “From Line Item to Leverage,” we made the case that digital infrastructure, when aligned to strategy, drives measurable shareholder impact. SEO is a prime example: It compounds over time, improves capital efficiency, and scales without inflating costs. To win in today’s environment, SEO must be commissioned like infrastructure: planned early, engineered with purpose, and connected to business strategy. Because the most significant growth levers are rarely flashy – they’re usually buried under decades of organizational neglect, waiting to be unlocked as a competitive advantage.

To achieve this, organizations must move beyond silos and recognize the chain reaction between searcher needs and business outcomes. That means understanding what potential customers want, ensuring that content exists in the correct format and mode, and making it discoverable and indexable.

Search marketing can be a cost-effective tactic for removing friction by matching sales and marketing content precisely with the needs of the person seeking it. In today’s AI-first environment, search becomes even more vital. It’s your early detection system for what customers care about – and the most capital-efficient lever you have to meet them there.

More Resources:


Featured Image: Master1305/Shutterstock

Why Are Brands Rethinking Their Approach To Using Agencies?

Many brands are re-evaluating their relationships with agencies.

In a 2024 report, 40% of companies surveyed said they were likely to switch from their primary agency in the next six months. Although that figure is down 15% year-on-year, it’s up 33% from 2021, and a signal that traditional client-agency models are under pressure.

Tightening budgets, the rise of in-house marketing teams, and evolving expectations are all driving brands to rethink how they use agencies.

Below is a breakdown of why this change is happening and what it means for both brands and agencies.

The Continued Rise Of In-Housing

Millions of businesses are bringing their once “agency only” services in-house and building specialist teams that handle strategic tasks like brand strategy, creative, or media planning.

Many brands no longer rely on external agencies for core marketing strategy; they’re developing that expertise internally.

So, why is this, and why are brands that saw agencies as a necessity now not needing this specialist arm of their business?

There’s a laundry list of reasons that are unique to each business, such as control, speed, efficiency, and expertise. However, the No. 1 driver is cost, with 83% of brands citing cost efficiency as the key reason for expanding their in-house teams.

If we think back over the last 30 years, the typical agency model hasn’t really changed. Agencies price based on several factors (percentage of media spend, deliverables, etc.) and then, once the business is won, apportion the fee downwards to be as efficient as possible. Marketing Week backs this up, with brands stating that a big frustration is meeting one team during the pitch and then working with a different, lesser-qualified team post-pitch.

What this means for brands is that they don’t receive the expertise promised, and may only get a small percentage of their fee allocated to the specialist, experienced teams, the ones who were involved in the pitch but then took a back seat.

In addition to this, a survey by WFA and The Observatory International found that motivators to move away from agency support included the desire for more agile processes (76%), better integration with the brand (59%), and deeper internal brand knowledge (59%).

So, has this move in-house impacted quality with specialist agencies out of the picture? Far from it. In many cases, quality has improved, with 86% of brands saying they’re satisfied with their in-house teams’ output and one-third completely satisfied, up sharply from 23% in 2020.

From an agency side, this may seem like doom and gloom. However, most brands still use agencies for certain needs, and it’s this shift in needs that offers a huge opportunity for specialist agencies to partner with brands to drive real change rather than being brought on to coast a service such as PPC or SEO along for 12 months.

Demand For Greater Value And Actual ROI

Another reason brands are rethinking agency relationships is a heightened focus on value and performance.

With economic uncertainty, marketing budgets under strain, and click costs increasing year-over-year, every dollar spent on an agency must be justified.

The Setup survey found that dissatisfaction with value was the No. 1 reason clients ended an agency relationship, moving up from No. 2 the year before, when the top reason was dissatisfaction with strategic approach.

Brands feel they’re not getting enough bang for their buck: agencies still bring strong ideas to the table, but often charge too much to justify the return, and execution of those ideas is few and far between, often forgotten amid the day-to-day management of accounts.

Budget pressures from the C-suite are further intensifying this scrutiny. When CEOs and CFOs demand leaner spending, CMOs often look to agency fees as a place to trim the fat, and if there’s no wiggle room from the agency, more often than not, marketing teams will be directed to find a cheaper supplier.

Gartner’s CMO survey confirmed this, stating that 39% of leaders often look to agency fees as a place to trim the fat when looking to lean up.

When you take this, combined with the frustrations around agility, efficiency, integration, and most importantly, results, it starts to build a picture of why brands are cutting agencies more every year.

The days of big retainers with murky results are numbered, and brands are quicker to pull the plug if results and value aren’t there.

For agencies, the need to prove worth continuously has never been so important.

Speaking up, having a voice, sharing expertise, and challenging the clients they work for – a cliche that has long been part of the agency world, but rarely acted on.

Take paid media agencies, for example. They need to be digging deeper and having conversations around profitability, return rates, and LTV over short-term metrics that the marketing team doesn’t need regurgitated back to them.

These conversations are the seeds that grow into established, long-term relationships, and the agencies with proven expertise will still win, as they will be proving their worth every single day and making themselves irreplaceable.

Fragmented Partnerships, Project-Based Work, And The Shortening Of Terms

Along with seeking more value, brands are changing how they engage agencies.

The classic model of a single agency handling all aspects of marketing has been fading (outside of the big six).

Many advertisers now maintain multiple agency relationships for different needs, for example, one shop for creative, another for media buying, others for SEO, social, PR, etc.

This fragmentation means the primary agency’s role is smaller than it was in the past, arguably making the role of a “sole lead agency” not as relevant this year as it once was.

If a brand can use a small, specialized agency or consultancy for paid media, one that can guarantee an experienced account lead actually doing the work, then another for SEO, content, and so on, they may not even need to invest the full budget, yet they get better-quality work and a closer relationship with each partner.

Leaning into this is the growth in project-based work vs. the traditional long-term retainer.

Instead of paying a single agency monthly to be on call for all needs, brands are bringing in agencies for defined projects or campaigns, essentially “auditioning” agencies through short-term work with/without a goal to find a long-term partner.

As industry veteran Avi Dan noted, “This shift from AOR to project-by-project is one of the most disruptive trends in the agency landscape.”

It gives clients more flexibility to test different partners and skills while pressuring agencies to perform in each project or risk not getting the next one.

There are pros and cons to both sides: One clear con is that managing multiple agency partners (and running periodic requests for proposal (RFPs) for new projects) is time-consuming; another is that brands risk losing the deep brand knowledge that a long-term agency partner accumulates.

Ultimately, the requirements depend on many factors specific to each brand. The shift towards more fluid, experimental relationships allows a more hand-picked approach to finding partners as and when they are needed, often with the underlying goal of finding a truly great agency to build a long-term relationship with.

Evolving Expectations Of Agency Partners

Beyond structural changes, brands’ expectations of their agencies have evolved.

It’s no longer enough for an agency to simply execute campaigns; clients want a true strategic partner.

According to the 2023 Setup survey, “chemistry” is the No. 1 factor clients look for when hiring an agency partner.

As impressive as a flashy portfolio or specialized expertise might be, marketers are prioritizing cultural fit, communication, and a genuine connection as the key pillars essential to executing great work.

Clear communication is absolutely key: no sugarcoating, hiding behind numbers, or excuses.

Straight talking, respectful and honest, three features that may not be the first three to spring to mind if brands were asked about their experience with agencies, but three that are critical.

This is a change, and a good change. It’s one that gets spoken about in pitches but, in my experience, is rarely executed, as agencies fear rocking the boat when things are not going to plan, which in reality is the best time to speak honestly with clients.

Leaning into this is the need for agencies to build a deeper business understanding of the industry, business model, and goals, and not just ticking boxes and managing accounts with a short-term view that doesn’t break the boundaries of forecasts and KPIs.

I’ve spoken with many brands that have a sour taste for agencies after paying enormous fees to have junior teams managing their paid media campaigns.

Situations where 100% of their resource was invested in day-to-day account management with no thought for measurement, buying, forecasting, attribution, CRO, modeling, etc., and this on some of the highest-spending accounts in the world.

Agencies that collaborate well with in-house marketers, respecting processes, sharing knowledge, and complementing internal capabilities, are much more likely to retain their client relationships and build upon their client base with this ethos at the heart of everything.

The Bottom Line

Brands are rethinking their approach to agencies because the marketing landscape demands it.

Faster turnarounds, tighter budgets, and more data-driven decision-making favor a model where brands take more control.

By building in-house teams, choosing smaller, more specialized partners, and holding agencies to higher performance standards, companies aim to achieve better agility and return on investment (ROI).

This doesn’t mean agencies are obsolete, far from it; it means the traditional agency model has changed.

What it means is that agencies must adapt to serve a new role: specialized, high-value partners who augment a brand’s own capabilities.

Brands want transparency and are tired of paying for cookie-cutter approaches to media buying, SEO, PR, and more, riding a contract out, and then moving to the next one.

Agencies need to prove the impact, dig deeper, and lead with accountability.

Now is the time for brands to define the value they want, and agencies to prove they can deliver it.

Done right, this creates leaner, smarter, and more productive partnerships that cut through noise and deliver outcomes that matter.

More Resources:


Featured Image: Anton Vierietin/Shutterstock

Why Some Brands Win in AI Overviews While Others Get Ignored [Webinar] via @sejournal, @hethr_campbell

Turn Reviews Into Real Visibility, Trust, and Conversions

Reviews are no longer just stars on a page. They are key trust signals that influence both humans and AI. With AI increasingly shaping which brands consumers trust, it is critical to know the review tactics that drive visibility, loyalty, and ROI.

Join our November 5, 2025 webinar to get a research-backed playbook that turns reviews and AI into measurable gains in search visibility, conversions, and credibility.

What You Will Learn

  • How trust signals like recency, authenticity, and response style influence rankings and conversions.
  • Where consumers are reading, leaving, and acting on reviews across Google, social media, and other platforms.
  • Proven frameworks for responding to reviews that build credibility, mitigate risks, and increase loyalty.

Why You Cannot Miss This Webinar

Based on a study of over 1,000 U.S. consumers, this session translates those insights into actionable frameworks to prove ROI, protect reputation, and strengthen client retention.

Register now to learn the latest AI and review tactics that help your brand get chosen and trusted.

🛑 Can’t make it live? Sign up anyway, and we will send you the on-demand recording.

Engineering better care

Every Monday, more than a hundred members of Giovanni Traverso’s Laboratory for Translational Engineering (L4TE) fill a large classroom at Brigham and Women’s Hospital for their weekly lab meeting. With a social hour, food for everyone, and updates across disciplines from mechanical engineering to veterinary science, it’s a place where a stem cell biologist might weigh in on a mechanical design, or an electrical engineer might spot a flaw in a drug delivery mechanism. And it’s a place where everyone is united by the same goal: engineering new ways to deliver medicines and monitor the body to improve patient care.

Traverso’s weekly meetings bring together a mix of expertise that lab members say is unusual even in the most collaborative research spaces. But his lab—which includes its own veterinarian and a dedicated in vivo team—isn’t built like most. As an associate professor at MIT, a gastroenterologist at Brigham and Women’s, and an associate member of the Broad Institute, Traverso leads a sprawling research group that spans institutions, disciplines, and floors of lab space at MIT and beyond. 

For a lab of this size—spread across MIT, the Broad, the Brigham, the Koch Institute, and The Engine—it feels remarkably personal. Traverso, who holds the Karl Van Tassel (1925) Career Development Professorship, is known for greeting every member by name and scheduling one-on-one meetings every two or three weeks, creating a sense of trust and connection that permeates the lab.

That trust is essential for a team built on radical interdisciplinarity. L4TE brings together mechanical and electrical engineers, biologists, physicians, and veterinarians in a uniquely structured lab with specialized “cores” such as fabrication, bioanalytics, and in vivo teams. The setup means a researcher can move seamlessly from developing a biological formulation to collaborating with engineers to figure out the best way to deliver it—without leaving the lab’s ecosystem. It’s a culture where everyone’s expertise is valued, people pitch in across disciplines, and projects aim squarely at the lab’s central goal: creating medical technologies that not only work in theory but survive the long, unpredictable journey to the patient.

“At the core of what we do is really thinking about the patient, the person, and how we can help make their life better,” Traverso says.

Helping patients ASAP

Traverso’s team has developed a suite of novel technologies: a star-shaped capsule that unfolds in the stomach and delivers drugs for days or weeks; a vibrating pill that mimics the feeling of fullness; the technology behind a once-a-week antipsychotic tablet that has completed phase III clinical trials. (See “Designing devices for real-world care,” below.) Traverso has cofounded 11 startups to carry such innovations out of the lab and into the world, each tailored to the technology and patient population it serves.

But the products are only part of the story. What distinguishes Traverso’s approach is the way those products are conceived and built. In many research groups, initial discoveries are developed into early prototypes and then passed on to other teams—sometimes in industry, sometimes in clinical settings—for more advanced testing and eventual commercialization. Traverso’s lab typically links those steps into one continuous system, blending invention, prototyping, testing, iteration, and clinical feedback as the work of a single interdisciplinary team. Engineers sit shoulder to shoulder with physicians, materials scientists with microbiologists. On any given day, a researcher might start the morning discussing an animal study with a veterinarian, spend the afternoon refining a mechanical design, and close the day in a meeting with a regulatory expert. The setup collapses months of back-and-forth between separate teams into the collaborative environment of L4TE.

“This is a lab where if you want to learn something, you can learn everything if you want,” says Troy Ziliang Kang, one of the research scientists. 

The range of problems the lab tackles reflects its interdisciplinary openness. One recent project aimed to replace invasive contraceptive devices such as vaginal rings with a biodegradable injectable that begins as a liquid, solidifies inside the body, and dissolves safely over time. 

Another project addresses the challenge of delivering drugs directly to the gut, bypassing the mucus barrier that blocks many treatments. For Kang, whose grandfather died of gastric cancer, the work is personal. He’s developing devices that combine traditional drugs with electroceuticals—therapies that use electrical stimulation to influence cells or tissues.

“What I’m trying to do is find a mechanical approach, trying to see if we can really, through physical and mechanical approaches, break through those barriers and to deliver the electroceuticals and drugs to the gut,” he says.

In a field where the process of translating scientific ideas into practical applications can take years (or stall indefinitely), Traverso, 49, has built a culture designed to shorten that path. Researchers focus on designing devices with the clinical relevance to help people in the near term.  And they don’t wait for outsiders to take an idea forward. They often initiate collaborations with entrepreneurs, investors, and partners to create startups or push projects directly into early trials—or even just do it themselves. The projects in the L4TE Lab are ambitious, but the aim is simple: Solve problems that matter and build the tools to make those solutions real.

Nabil Shalabi, an instructor in medicine at Harvard/BWH, an associate scientist at the Broad Institute, and a research affiliate in Traverso’s lab, sums up the attitude succinctly: “I would say this lab is really about one thing, and it’s about helping people.”

The physician-inventor

Traverso’s path into medicine and engineering began far from the hospitals and labs where he works today. Born in Cambridge, England, he moved with his family to Peru when he was still young. His father had grown up there in a family with Italian roots; his mother came from Nicaragua. He spent most of his childhood in Lima before political turmoil in Peru led his family to relocate to Toronto when he was 14.

In high school, after finishing most of his course requirements early, he followed the advice of a chemistry teacher and joined a co-op program that would give him a glimpse of some career options. That decision brought him to a genetics lab at the Toronto Hospital for Sick Children, where he spent his afternoons helping map chromosome 7 and learning molecular techniques like PCR.

“In high school, and even before that, I always enjoyed science,” Traverso says.

After class, he’d ride the subway downtown and step into a world of hands-on science, working alongside graduate students in the early days of genomics.

“I really fell in love with the day-to-day, the process, and how one goes about asking a question and then trying to answer that question experimentally,” he says.

By the time he finished high school, he had already begun to see how science and medicine could intersect. He began an undergraduate medical program at Cambridge University, but during his second year, he reached out to the cancer biologist Bert Vogelstein and joined his lab at Johns Hopkins for the summer. The work resonated. By the end of the internship, Vogelstein asked if he’d consider staying to pursue a PhD. Traverso agreed, pausing his medical training after earning an undergraduate degree in medical sciences and genetics, and moved to Baltimore to begin a doctorate in molecular biology.

As a PhD student, he focused on the early detection of colon cancer, developing a method to identify mutations in stool samples—a concept later licensed by Exact Sciences and used in what is now known as the Cologuard test. After completing his PhD (and earning a spot on Technology Review’s 2003 TR35 list of promising young innovators for that work), he returned to Cambridge to finish medical school and spent the next three years in the UK, including a year as a house officer (the equivalent of a clinical intern in the US).

Traverso chose to pursue clinical training alongside research because he believed each would make the other stronger. “I felt that having the knowledge would help inform future research development,” he says.

An ingestible drug-releasing capsule about the size of a multivitamin expands into a star shape once inside the patient’s stomach.
JARED LEEDS

So in 2007, as Traverso began a residency in internal medicine at Brigham and Women’s, he also approached MIT, where he reached out to Institute Professor Robert Langer, ScD ’74. Though Traverso didn’t have a background in Langer’s field of chemical engineering, he saw the value of pairing clinical insight with the materials science research happening in the professor’s lab, which develops polymers, nanoparticles, and other novel materials to tackle biomedical challenges such as delivering drugs precisely to diseased tissue or providing long-term treatment through implanted devices. Langer welcomed him into the group as a postdoctoral fellow.

In Langer’s lab, he found a place where clinical problems sparked engineering solutions, and where those solutions were designed with the patient in mind from the outset. Many of Traverso’s ideas came directly from his work in the hospital: Could medications be delivered in ways that make it easier for patients to take them consistently? Could a drug be redesigned so it wouldn’t require refrigeration in a rural clinic? And caring for a patient who’d swallowed shards of glass that ultimately passed without injury led Traverso to recognize the GI tract’s tolerance for sharp objects, inspiring his work on the microneedle pill.

“A lot of what we do and think about is: How do we make it easier for people to receive therapy for conditions that they may be suffering from?” Traverso says. How can they “really maximize health, whether it be by nutrient enhancement or by helping women have control over their fertility?” 

If the lab sometimes runs like a startup incubator, its founder still thinks like a physician.

Scaling up to help more people

Traverso has cofounded multiple companies to help commercialize his group’s inventions. Some target global health challenges, like developing more sustainable personal protective equipment (PPE) for health-care workers. Others take on chronic conditions that require constant dosing—HIV, schizophrenia, diabetes—by developing long-­acting oral or injectable therapies.

From the outset, materials, dimensions, and mechanisms are chosen for more than just performance in the lab. The researchers also consider the realities of regulation, manufacturing constraints, and safe use in patients.

“We definitely want to be designing these devices to be made of safe materials or [at a] safe size,” says James McRae, SM ’22, PhD ’25. “We think about these regulatory constraints that could come up in a company setting pretty early in our research process.” As part of his PhD work with Traverso, McRae created a “swallow-­and-forget” health-tracking capsule that can stay in the stomach for months—and it doesn’t require surgery to install, as an implant would. The capsule measures tiny shifts in stomach temperature that happen whenever a person eats or drinks, providing a continuous record of eating patterns that’s far more reliable than what external devices or self-reporting can capture. The technology could offer new insight into how drugs such as Ozempic and other GLP-1 therapies change behavior—something that has been notoriously hard to monitor. From “day one,” McRae made sure to involve external companies and regulatory consultants for future human testing.

Traverso describes the lab’s work as a “continuum,” likening research projects to children who are born, nurtured, and eventually sent into the world to thrive and help people.

Traverso and his team developed a device that can adhere to soft, wet surfaces. The design was inspired by studies of a sucker fish that attaches to sharks and other marine animals.
COURTESY OF THE RESEARCHERS

For lab employee Matt Murphy, a mechanical engineer who manages one of the main mechanical fabrication spaces, that approach is part of the draw. Having worked with researchers on projects spanning multiple disciplines—mechanical engineering, electronics, materials science, biology—he’s now preparing to spin out a company with one of Traverso’s postdocs. 

“I feel like I got the PhD experience just working here for four years and being involved in health projects,” he says. “This has been an amazing opportunity to really see the first stages of company formation and how the early research really drives the commercialization of new technology.”

The lab’s specialized “cores” ensure that projects have consistent support and can draw on plenty of expertise, regardless of how many students or postdocs come and go. If a challenge arises in an area in which a lab member has limited knowledge, chances are someone else in the lab has that background and will gladly help. “The culture is so collaborative that everybody wants to teach everybody,” says Murphy.

Creating opportunities 

In Traverso’s lab, members are empowered to pursue technically demanding research because the culture he created encourages them to stretch into new disciplines, take ownership of projects, and imagine where their work might go next. For some, that means cofounding a company. For others, it means leaving with the skills and network to shape their next big idea.

“He gives you both the agency and the support,” says Isaac Tucker, an L4TE postdoc based at the Broad Institute. “Gio trusts the leads in his lab to just execute on tasks.” McRae adds that Traverso is adept at identifying “pain points” in research and providing the necessary resources to remove barriers, which helps projects advance efficiently. 

A project led by Kimberley Biggs, another L4TE postdoc, captures how the lab approaches high-stakes problems. Funded by the Gates Foundation, Biggs is developing a way to stabilize therapeutic bacteria used for neonatal and women’s health treatments so they remain effective without refrigeration—critical for patients in areas without reliable temperature-controlled supply chains. A biochemist by training, she had never worked on devices before joining the lab, but she collaborated closely with the mechanical fabrication team to embed her bacterial therapy for conditions such as bacterial vaginosis and recurrent urinary tract infections into an intravaginal ring that can release it over time. She says Traverso gave her “an incredible amount of trust” to lead the project from the start but continued to touch base often, making sure there were “no significant bottlenecks” and that she was meeting all the goals she wanted to meet to progress in her career.

Traverso encourages collaboration by putting together project teams that combine engineers, physicians, and scientists from other fields—a strategy he says can be transformative. 

“If you only have one expert, they are constrained to what they know,” he explains. But “when you bring an electrical engineer together with a biologist or physician, the way that they’ll be able to see the problem or the challenge is very different.” As a result, “you see things that perhaps you hadn’t even considered were possible,” he says. Moving a project from a concept to a successful clinical trial “takes a village,” he adds. It’s a “complex, multi-step, multi-person, multi-year” process involving “tens if not hundreds of millions of dollars’ worth of effort.”

Good ideas deserve to be tested

The portion of Traverso’s lab housed at the “tough tech” incubator The Engine—and the only academic group working there—occupies a 30-bench private lab alongside shared fabrication spaces, heavy machinery, and communal rooms of specialized lab equipment. The combination of dedicated and shared resources has helped reduce some initial equipment expenses for new projects, while the startup-dense environment puts potential collaborators, venture capital, and commercialization pathways within easy reach. Biggs’s work on bacterial treatments is one of the lab’s projects at The Engine. Others include work to develop electronics for capsule-based devices and an applicator for microneedle patches.

The end of one table houses “blue sky” research on a topic of long-standing interest to Traverso: pasta. Led by PhD student Jack Chen, the multi-pronged project includes using generative AI to help design new pasta shapes with superior sauce adhesion. Chen and collaborators ranging from executive chefs to experts in fluid dynamics apply the same analytical rigor to this research that they bring to medical devices. It’s playful work, but it’s also a microcosm of the lab’s culture: interdisciplinary to its core, unafraid to cross boundaries, and grounded in Traverso’s belief that good ideas deserve to be tested—even if they fail.

“I’d say the majority of things that I’ve ever been involved in failed,” he says. “But I think it depends on how you define failure.” He says that most of the projects he worked on for the first year and a half of his own PhD either just “kind of worked” or didn’t work at all—causing him to step back and take a different approach that ultimately led him to develop the highly effective technique now used in the Cologuard test. “Even if a hypothesis that we had didn’t work out, or didn’t work out as we thought it might, the process itself, I think, is valuable,” he says. So his philosophy is to “fail well and fail fast and move on.”

A tiny capsule that delivers a burst of medication directly into the GI tract offers an alternative to injections.
JARED LEEDS

In practice, that means encouraging students and postdocs to take on big, uncertain problems, knowing a dead end isn’t the end of their careers—just an opportunity to learn how to navigate the next challenge better.

McRae remembers when a major program—two or three years in the making—abruptly changed course after its sponsor shifted priorities. The team had been preparing a device for safety testing in humans; suddenly, the focus on that goal was gone. Rather than shelving the work, Traverso urged the group to use it as an opportunity to “be a little more creative again” and explore new directions, McRae says. That pivot sparked his work on an autonomous drug delivery system, opening lines of research the team hadn’t pursued before. In this system, patients swallow two capsules that interact in the stomach. When a sensor capsule detects an abnormal signal, it directs a second capsule to release a drug.

“When things aren’t working, just make sure they didn’t work and you’re confident why they didn’t work,” Traverso says he tells his students. “Is it the biology? Is it the materials science? Is it the mechanics that aren’t just aligning for whatever reason?” He models that diagnostic mindset—and the importance of preserving momentum. 

“He will often say, ‘I have a focus on not wasting time. Time is something that you can’t buy back. Time is something that you can’t save and bank for later,’” says Biggs. “And so whenever you do encounter some sort of bottleneck, he is so supportive in trying to fix that.” 

Traverso’s teaching reflects the same interplay between invention, risk, and real-world impact. In Translational Engineering, one of his graduate-level courses at MIT, he invites experts from the FDA, hospitals, and startups to speak about the realities of bringing medical technology to the world.

“He shared his network with us,” says Murphy, who took the course while working in the lab. “Now that I’m trying to spin out a company, I can reach out to these people.” 

Although he now spends most of his time on research and teaching, Traverso maintains an inpatient practice at the Brigham, participating in the consult service—a team of gastroenterology fellows and medical students supervising patient care—for several weeks a year. Staying connected to patients keeps the problems concrete and helps guide decisions on which puzzles to tackle in the lab.

“I think there are certain puzzles in front of us, and I do gravitate to areas that have a solution that will help people in the near term,” he says.

For Traverso, the measure of success is not the complexity of the engineering but the efficacy of the result. The goal is always a therapy that works for the people who need it, wherever they are. 


Designing devices for real-world care 

A sampling of recent research from Traverso’s Lab for Translational Engineering

A mechanical adhesive device inspired by sucker fish sticks to soft, wet surfaces; it could be used to deliver drugs in the GI tract or to monitor aquatic environments. 

A pill based on Traverso’s technology can be taken once a week and gradually releases medication within the stomach. It’s designed for patients with conditions like schizophrenia, hypertension, and asthma who find it difficult to take medicine every day. 

A new delivery method for injectable drugs uses smaller needles and fewer shots. Drugs injected as a suspension of tiny crystals assemble into a “depot” under the skin that could last for months or years. 

A protein from tiny tardigrades, also known as “water bears,” could protect healthy cells from radiation damage during cancer treatments, reducing severe side effects that many patients find too difficult to tolerate. Injecting messenger RNA encoding this protein into mice produced enough to protect healthy cells.

An inflatable gastric balloon could be enlarged before a meal to prevent overeating and help people lose weight. 

Inspired by the way squid use jets to shoot ink clouds, a capsule releases a burst of drugs directly into the GI tract. It could offer an alternative to injecting drugs such as insulin, as well as vaccines and therapies to treat obesity and other metabolic disorders.

An implantable device could reverse opioid overdoses. Placed under the skin, it rapidly releases naloxone when it detects an overdose.

A screening device for cervical cancer offers a clear line of sight to the cervix in a way that causes less discomfort than a traditional speculum. It’s affordable enough for use in low- and middle-income countries.

Infinite folds

When Madonna Yoder ’17 was eight years old, she learned how to fold a square piece of paper over and over and over again. After about 16 folds, she held a bird in her hands.

The first time she pulled the tail of a flapping crane, she says, she realized: Oh, I folded this, and now it’s a toy.

That first piece was an origami classic, folded by kids at summer camp for generations and many people’s first foray into the art form. Often, it’s also the last. But Yoder was transfixed. Soon she was folding everything she could find: paper squares from chain craft shops, scraps from around the house, the weekly church bulletin, which she would cut into pieces with the aid of her fingernails. She would then “turn those into little critters and give them to any guests that were there that week,” she says. 

Today, perhaps millions of folds later, Yoder is a superstar known to some as the “Queen of Tessellations,” a reference to a mathematically intricate type of origami that she began exploring during her years at MIT. 

“These are patterns that can repeat infinitely and are folded on a single sheet of paper,” Yoder explains. “There’s literally no end to the patterns themselves, no end to the number of designs you can create … They’re folded by hand—I don’t know of any machine that could fold them—and they are a really great way to just sit and focus and relax.”

Her pieces have grown increasingly complex over time, but the patterns she creates are based on recognizable shapes, including hexagons, triangles, rhombuses, and trapezoids. Yoder folds and rotates them into repeating, potentially infinite series of shapes. Picture the graphic pattern in an M.C. Escher print, but made out of a single sheet of paper—a piece of art underpinned by mathematics and a bit of engineering, combined with the complexity of a snowflake. 

Yoder grew up in southwestern Virginia, in the Blue Ridge Mountain town of Shawsville, where professors from Virginia Tech filled the pews at her Mennonite church. “All of us kids were expected to go to college,” she says. After she made her way to MIT, her brother, Jake, earned his PhD in materials engineering at Virginia Tech and now works with 3D-printed metals. Her mother, Janet, is a physical therapist and her father, Denton, is a computer systems engineer at Virginia Tech.

From a young age, Yoder had an inclination for making things with her hands. “I was kind of that kid—I did all the different crafts. I did a lot of cross-stitch,” she says, including a portrait of her grandmother that now hangs framed in her kitchen. 

She also remembers an early appreciation for accuracy. “My mom tells the story about when I was five years old, we were cutting out squares, and I was like: ‘Mom, your squares are not precise enough,’” she says. 

Toward the end of her senior year of high school, Yoder won a math competition, which came with an apt prize: a book about modular origami, in which multiple sheets of paper are folded and combined into often elaborate structures. She took a gap year in Peru, where she continued to fold, giving little modular pieces away to children she met on her travels.

Yoder had always done paper folding in solitude, with guidance only from books. When she arrived at MIT after her time in Peru, she was surprised to learn about weekly origami gatherings and the annual convention held by the campus club OrigaMIT.

“It took until I got to MIT to realize that, oh, this is an active space where people are meeting up and designing things and talking to each other about origami all weekend,” she says. She majored in Earth, Atmospheric, and Planetary Sciences (EAPS), but in the spring of her senior year, she took Erik Demaine’s popular class Geometric Folding Algorithms—and discovered that “origami research was something that people got paid to do,” as she puts it. Her final project for the class became a poster presentation at the 7th International Meeting on Origami in Science, Mathematics, and Education (7OSME). “In that course, I got hooked on origami research,” Yoder says.

Demaine remembers that Yoder started to explore concepts related to tessellations in his class, which eventually led to the publication of her first paper—“Folding Triangular and Hexagonal Mazes,” coauthored with him and Jason Ku, then a lecturer at MIT. In that paper, Yoder helped demonstrate how to “generalize” a square grid maze to triangular and hexagonal grids by changing the underlying crease pattern. “We probably suggested this as an interesting open problem for people to work on, and Madonna found a really happy niche there,” says Demaine, who isn’t aware of any other former students pursuing careers in origami. “We provided the space for her to do the research, but then she went whole hog on it.” 

But she didn’t truly embrace tessellations until after she graduated and was preparing for a four-and-a-half-month MIT-sponsored internship in Israel. “These modulars have a lot of volume—I’m not going to bring back a suitcase full of them,” she remembers thinking. And she wasn’t going to leave behind four-plus months of folding work. “So I decided to teach myself to fold tessellations because they’re flat and travel well,” she says.  “Then it took root in my brain and never let me go.”

But there was the practical matter of making a living.

Origami principles have been used to conceive of and develop a wide range of things, from the tiny (think medical instruments or nanoscale devices that can deliver DNA into cells) to the large (such as collapsible structures usable in disaster response or foldable solar arrays for space exploration). Yoder figured if she wanted to pursue origami as a career, she would have to do it as a scientist or engineer.

But after reverse-engineering hundreds of origami patterns she found online—and starting to design her own—she began to suspect otherwise. “I realized it’s actually possible to make a living as an origami artist,” she remembers. “I won’t say that now, five years out from that decision, I’ve reached a point of being able to fully financially support myself with origami, but thankfully, I married a software engineer.” (She met her husband, Manny Meraz-Rodriguez, while the two were working at the Lawrence Livermore National Laboratory, she as an intern and then as a postcollege appointee in computational geoscience.)

Origami purists will say that true origami requires no cuts, no glue. The only slicing Yoder does is with a rotary cutter she uses to make hexagonal pieces of paper, stacks at a time. Though she starts with squares sometimes, the hexagon is her favored launching pad. She creases the paper into a grid, and then—following a design that she’s created using a vector graphics program called Inkscape—begins to fold.

“The main reason why I draw the patterns out first, besides the fact that the designs have gotten too complicated for me to hold in my brain and solve on the fly, is because I like to have the pattern rotated so that the repeats of the pattern align with the edge, which you can only do if you have the information of how the repeats of the pattern line up with the background grid,” she explains. 

Using a simple tool called a bone folder (Yoder says she’s had hers for years and could pick it out of a pile by the wear pattern), she presses and creases and rotates the paper into an elaborate pattern that could, in theory, go on forever. The end result is a beautiful, satisfyingly symmetrical array of repeated, interlocking shapes that look especially impressive when held up to the light, bringing to mind a stained-glass window.

Scroll down to learn how to fold this Dancing Ribbons tessellation created by Yoder.

Scholars debate whether the ancient tradition of origami began in Japan or China, but the art really took off globally in the 1950s and ’60s when publishers printed and mass-marketed diagrams showing people how to fold paper into figurative objects such as birds, fish, and animals. Paper tessellations have roots in Germany in the 1920s, when the artist Josef Albers added folding to his introductory design course at the Bauhaus. This geometric tradition started gaining popularity in the 1980s and 1990s, and now, Yoder says, there are perhaps tens of thousands of people who participate. The broader universe of origami practitioners likely numbers in the millions.

These aficionados attend conferences, watch YouTube videos, and take online courses, most of them to learn existing patterns. Yoder creates her own: In addition to the peer-reviewed academic papers she’s authored on the mathematical underpinnings of her tessellations (with titles like “Symmetry Requirements and Design Equations for Origami Tessellations” and “Hybrid Hexagon Twist Interface”) and regular presentations at origami conferences across the globe, she’s designed 696 original patterns. Each year in an event she calls Advent of Tess, she teaches thousands of online participants a new design every day of December leading up to Christmas, and her website, Gathering Folds, has become a go-to source, not just for Yoder’s artwork but for instruction. 

Her EAPS degree from MIT may not seem like a foundation for a career as an artist, but Yoder, who studied geology with a secondary focus on ecology, says there are connections between the fields. “There is a lot of carryover between the crystal structures and the tessellation symmetries,” she explains. “Every repeating 2D pattern obeys one of the planar symmetry groups … There are things that repeat like a hexagon, things that repeat like a square, things that repeat like a triangle, and things that repeat like a parallelogram or rectangle. And then there are things that are not rotationally symmetric. Those ideas of how things connect and how things repeat definitely carry over from my crystallography class.”
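One way to make the carryover she describes concrete: the “planar symmetry groups” are the 17 wallpaper groups of crystallography, and a standard result known as the crystallographic restriction theorem explains why hexagons, squares, triangles, and parallelograms are the only rotational repeats she needs to list. Stated as a brief mathematical aside (this is textbook crystallography, not drawn from Yoder’s own papers): for a pattern that repeats along two independent directions on a flat sheet, any rotational symmetry of order \(n\) must satisfy

\[
2\cos\!\left(\frac{2\pi}{n}\right) \in \mathbb{Z},
\qquad\text{which forces}\qquad
n \in \{1,\ 2,\ 3,\ 4,\ 6\}.
\]

Fivefold or sevenfold repeats fail the test, which is why no periodic tessellation, in a crystal or on a folded sheet, can have that symmetry.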

Yoder cites the origami artist and physicist Robert Lang as one of the current practitioners who influenced her the most. He, like Yoder, has a math and science background but forged a career in art. 

“The thing that has set her above the current crowd is that she’s really systematically explored the building blocks of tessellations and the different little patterns that can be considered building blocks, and the rules for connecting these blocks,” he explains. “Madonna’s knowledge and understanding of mathematics and geometry gives her a broader tool kit to create art, and that’s led to her success as an artist. You can’t separate the art from the science background. It’s part of the thinking process, even if the end goal is very much in the fine art world.”

For Yoder, the process, both computational and tactile, is also an end in itself. It is almost a meditation—a way to slow down and contemplate. Some of her students have even suggested there might be a spiritual component to it. One said to her: “You know, the name for that connection to infinite things is called God, right?”

“So I kind of leave that more open,” she says. “I’m not super decided about what these things mean. I’m just happy to have that spark when I’m designing a pattern: Here’s how the shapes hang together, and now that I’ve drawn out those shapes, I can copy and paste, paste, paste, paste, paste, and it just clicks in very satisfyingly.”

Yoder has considered whether she will ever get bored pursuing the possibilities of infinite patterns—whether she will achieve perfection and decide to put the bone folder away for good.

“But I’m not convinced that I will,” she says. “There are always ways to make it harder and harder.”

diagram of origami pattern
example of folded pattern


Fold it yourself

Try your hand at folding Madonna Yoder’s Dancing Ribbons tessellation design featuring three closed twists: hexagon, triangle, and rhombus.

Basic instructions

1. Download the pattern here and cut out the hexagon with the crease pattern.

2. Fold all the background grid lines, making sure to fold them back and forth so the paper is ready to form the pattern. (You can precrease all the off-grid folds too, but Yoder recommends folding one twist at a time.) This pattern shows mountain folds with solid red lines and valley folds with dashed blue lines. The faded lines inside the twists are helper folds used to set up the twists; they will not be used in the final pattern.

3. Working from the side without the pattern, fold the central hexagon.

4. Fold the triangles.

5. Fold the rhombuses.

Find more detailed instructions and a video tutorial, as well as paper advice, at technologyreview.com/tessellation.

You can also sign up for Yoder’s annual Advent of Tess, a 25-day folding challenge that begins December 1, at https://training.gatheringfolds.com/advent.

25 years of research in space

On November 2, 2000, NASA astronaut Bill Shepherd, OCE ’78, SM ’78, and Russian cosmonauts Sergei Krikalev and Yuri Gidzenko made history as their Soyuz spacecraft docked with the International Space Station. 

The event marked the start of 25 years of continuous human presence in space aboard the ISS—a prolific period for space research. MIT-trained astronauts, scientists, and engineers have played integral roles in all aspects of the station’s design, assembly, operations, and scientific research. 

One of MIT’s most experienced NASA astronauts, Mike Fincke ’89, is celebrating that milestone from space. Having already logged 381 days in three previous missions to the ISS, he returned on August 1 as a member of the Expedition 73 crew. “Wow, 25 years of constant human habitation in space!” he said when he spoke with me from the station in September. “What an accomplishment and a testimony to the teams on the ground and in terms of engineering, science, and diplomacy.” 

Building and operating the ISS

“We understood that building the ISS was significantly more difficult than anything we’d attempted before with the possible exception of Apollo,” says Pamela Melroy, SM ’84, who flew the space shuttle on three ISS assembly missions, including STS-92 in October 2000, which installed key modules and structures that prepared the station for the arrival of Shepherd and his crew less than two weeks later. “We learned a tremendous amount from the Shuttle-Mir program that I think gave us a lot more confidence going into ISS assembly,” she says.

Melroy was one of 10 MIT astronauts who participated in 13 space shuttle missions to assemble and resupply the ISS through 2011. “It’s pretty awe-inspiring to just go, ‘Wow, there is the visible evidence of what we just spent 10 to 14 days doing,’” she recalls. She also saw just how critical logistics are to resupply operations—especially since the retirement of the shuttle. 

Shepherd, who served as Expedition One commander, and his crew overcame a variety of challenges as they adapted to living in space, continued the assembly of the ISS, and installed and activated its life support and communications systems. “We were blue-collar maintenance guys for most of our flight,” he says. “I really enjoyed that part of it.” After arriving on the ISS, he discovered that the Russian service module was missing a worktable that his crew had found to be very useful in training. He asked Moscow, “Where’s our table?” and was told, “It’s going to come up six months after you guys are gone.” 

Cargo flights had delivered canisters of carbon dioxide absorbers packaged in sturdy aluminum frames. Upon inspecting the frames, they decided there was no reason to remain table-less. “We had some special tools that we had smuggled on board,” he recalls. “So we started to cut and drill and thread and fabricate a table out of scraps.” It turned out to be a pretty good table. “When Houston found out about it, they went nuts, because we were up there sawing, making chips and aluminum sawdust,” he says. “But we got through all that.” Now in the Smithsonian, it is “definitely an MIT-designed table,” Shepherd says. 

Twelve MIT alums and one MIT affiliate from the Whitehead Institute have logged a total of 18 long-duration missions to the ISS. Cady Coleman ’83 served as lead robotics and science officer during a 159-day expedition in 2010 and 2011. She performed hundreds of experiments, ranging from basic science to technology development for future moon and Mars missions. “At MIT, we were always invited to be part of scientific discovery,” Coleman says. “We carried MIT’s standard of excellence into every field. Most importantly, our education taught us that we were part of a larger mission to make the world a better place.”

Citing the “mens et manus” motto on the Brass Rat he was wearing in space, Fincke observed that MIT prepared him well for his job. “When you have such a critical mass of really intelligent people and critical thinkers, it really makes a difference and brings out the best in all of us, including me,” he said. “So thank you, MIT.”

Woody Hoburg ’08, who was an assistant professor of aero-astro before piloting a 186-day mission to the ISS in 2023, concurs: “It’s no surprise that so many exceptional MIT thinkers and doers end up shaping our boldest achievements in space. The ISS is certainly one of those—it’s a beautiful machine, constructed while I was still in high school and later studying Course 16 at MIT, flying five miles per second over Earth that whole time.”

Science in space

A wide range of MIT faculty and students have taken advantage of the ISS’s unique access to space to conduct research. 

“MIT’s MACE-II [Middeck Active Control Experiment] was the first active US scientific investigation performed on the International Space Station,” Shepherd said back in 2001. “Performing scientific investigations like MACE-II on board the station allows for successful interaction, almost in real time, between the astronauts in space and investigators on the ground.” Developed by aero-astro professor David Miller ’82, SM ’85, ScD ’88, and the Space Systems Laboratory (SSL) he then directed, MACE-II successfully tested techniques for predicting and controlling the dynamics of structures in microgravity. Miller says that the structural dynamics techniques developed through MACE were later used to test the James Webb Space Telescope.  

Miller and the SSL also led the development of SPHERES (Synchronized Position Hold Engage and Reorient Experimental Satellites), a set of satellites used on board the ISS from 2006 through 2019. Inspired by the Jedi training ball from the original Star Wars, SPHERES evolved from an undergraduate aero-astro capstone project into an ISS facility for studying the dynamic control of satellites flying together in space. Three independent free-flying satellites operated inside the ISS within an infrared/ultrasonic measurement system that provided precise positioning and attitude information in three dimensions. SPHERES let researchers develop and test algorithms for precision control of multiple spacecraft during complex collaborative operations. Its modular design permitted the addition of electromagnets for precise tandem flight, vision systems for navigation, and hardware for investigating the sloshing of fluids in space. 

Greg Chamitoff, PhD ’92, became the first principal investigator to directly perform his own scientific research on the ISS when he programmed SPHERES during Expedition 17 in 2008. Miller recalls that when Chamitoff later visited MIT, he asked, “Why don’t we create the first primary school robotics competition ever hosted off the planet?” During the next decade, nearly 20,000 high school and middle school students from around the world participated in Zero Robotics, writing algorithms to control the SPHERES satellites in STEM competitions conducted onboard the ISS. Both MACE-II and SPHERES were returned to Earth and will be on display at the National Air and Space Museum in the “At Home in Space” gallery slated to open in 2026.

Samuel C.C. Ting, the Thomas Dudley Cabot Professor of Physics at MIT, led a $2 billion international effort to develop the Alpha Magnetic Spectrometer (AMS) with the ambitious goal of searching for antimatter, determining the origin of dark matter, and understanding the properties of cosmic rays. Delivered to the ISS in 2011 by one of the final space shuttle missions, the AMS has precisely measured over 253 billion cosmic ray events with energies up to multiple tera-electron-volts. Fully interpreting the comprehensive experimental data still being generated by the AMS will require new physics models. “I would imagine 100 years from now most of my work will be forgotten,” Ting says. “But if people remember anything, it probably will be AMS.” 

Kate Rubins, a microbiologist, was a fellow at the Whitehead Institute when she was selected as a NASA astronaut in 2009—and became the first person to sequence DNA in space during her long-duration ISS mission in 2016. She did so using a commercially available metagenomics sequencer, despite the risk that it might not function in orbit. “To everybody’s surprise, it worked, and it worked the first time,” she recalls. “I don’t know if I’ve ever had a lab experiment in my life that has worked the first time, but genomic sequencing in space was a big one to have that happen.”

Rubins wanted to conduct her own scientific research during her spare time in orbit, so she got permission from NASA to substitute her own lab bench equipment—including pipettes, tubes, and scientific plasticware—for the small kit of personal items that astronauts are allowed to bring to space. She got a NASA psychologist to help make the case. “He said, ‘You know, Kate’s a nerd—she loves doing this stuff … we have to fly this on board for her,’” she says. Rubins successfully demonstrated that regular biology lab equipment could be used to conduct science in space—and donated that equipment for use by future ISS crews. (“Every astronaut turns into a scientist when they get on board the space station,” she says.) She recently coauthored a paper describing the creation of a microbiome map of the ISS—a 3D map showing where astronauts found various microbes and metabolites when they collected samples in space. She calls the work “super exciting.” 

The ISS also serves as a test bed for new technologies that will support NASA’s ambitious programs to explore the moon and Mars. In 2023, MIT Lincoln Laboratory successfully demonstrated high-bandwidth laser communications in space between its ILLUMA-T laser communications terminal onboard the ISS and a NASA Laser Communications Relay Demonstration satellite. When the Artemis II astronauts launch to the moon in early 2026, their Orion spacecraft will use the optical communications system developed by Lincoln Laboratory’s Optical and Quantum Communications Group and the Goddard Space Flight Center to transmit high-resolution imagery of the lunar surface back to Earth via lasers capable of data rates up to 260 megabits per second. 

International cooperation

One of the most enduring legacies of the International Space Station, which is slated to continue operations through 2030, is the vast scale of international cooperation that made it possible. 

The roots of the project trace back to 1984, when President Ronald Reagan challenged NASA to lead an effort to build an Earth-orbiting space station within a decade. But by the early 1990s, the Space Station Freedom was significantly over budget and behind schedule. Shortly after taking office in 1993, President Bill Clinton asked MIT President Charles Vest to lead the Advisory Committee on the Redesign of the Space Station. In the wake of the Soviet Union’s collapse, the Vest committee recommended that “NASA and the Administration further pursue opportunities for cooperation with the Russians as a means to enhance the capability of the station, reduce cost, provide alternative access to the station, and increase research opportunities.” That led NASA to invite the Russian space agency Roscosmos to join an international ISS coalition. And today, the ISS is operated cooperatively by the space agencies of the United States (NASA), Russia (Roscosmos), Japan (JAXA), Canada (CSA), and Europe (ESA). 

Bill Shepherd, OCE ’78, SM ’78, and his crewmates built this worktable in space using tools they’d smuggled on board. They inscribed “The Best from Nothing” in Latin on its side.
COURTESY OF BILL SHEPHERD

“We went from a space race during the Apollo time frame to—actually now we work together, humans across planet Earth, making something pretty incredible,” Fincke says. “Hats off to all of my crewmates and to all of the teams across planet Earth that put this beautiful space station together.”  

As deputy administrator of NASA from 2021 to 2025, Melroy helped lead NASA during a challenging period following the Russian invasion of Ukraine. “When people are united by something that they’re equally passionate about,” she says, “you overcome the barriers of cultural, language, political differences.” NASA and Roscosmos had established a “level of trust,” she says, “and there are relationships at every single level.” Keeping relationships nonpolitical was a guiding principle, Melroy says, “and our Russian partners respected that and agreed.”

“We still have our partnership in space even though on the ground we’re not quite getting along,” Fincke says. “We have a beautiful solar system to go explore, and someday we’re gonna have the stars.” And that, he says, will be possible “if we stop fighting and put our efforts toward exploration.”

In 2001 Shepherd predicted, “It’s very likely that the day of our launch … will be the last day that humans will live only on planet Earth.” And after 25 years of living and working on the International Space Station, humans appear to be up to the challenge of proving him right.

John Tylko ’79, PhD ’23, an aerospace engineer and technology historian, witnessed the 2000 launch of the first ISS crew at the Baikonur Cosmodrome and the docking of their spacecraft with the ISS from the Russian Mission Control Center near Moscow. 


Expedition 73 astronaut Michael Fincke ’89 inside the European Columbus laboratory module of the International Space Station in August 2025. While being interviewed from the ISS in September, Fincke said that MIT prepared him well for his time in space, from the aero-astro classes that taught him about airplanes and rockets—and critical thinking—to his Russian language and EAPS classes. “When you have such a critical mass of really intelligent people and critical thinkers, it really makes a difference and brings out the best in all of us, including me,” he said. “So thank you, MIT.”
NASA
Astronaut Woody Hoburg ’08 conducts a spacewalk outside the International Space Station to deploy new solar arrays during Expedition 68 on June 9, 2023.
NASA
Expedition 64 astronaut Kate Rubins, a Whitehead Fellow, with the DNA sequencing experiment she ran aboard the ISS on January 22, 2021. Rubins was the first astronaut to sequence DNA in space, during Expedition 48 in 2016.
NASA
Mike Fincke ’89, Cady Coleman ’83, and Greg Chamitoff, PhD ’92, made a video to offer extraterrestrial congratulations on the Institute’s 150th anniversary while they were all aboard the ISS in 2011. In this still from the video, they’re seen with the three SPHERES satellites developed by MIT’s Space Systems Laboratory.
NASA
Samuel C.C. Ting, the Thomas Dudley Cabot Professor of Physics at MIT, with a model of the Alpha Magnetic Spectrometer (AMS) at a Kennedy Space Center news conference on April 28, 2011.
JOHN TYLKO
Expedition 18 astronauts Greg Chamitoff, PhD ’92 (left) and Mike Fincke ’89 (center) with spaceflight participant Richard Garriott on October 22, 2008, in the ISS Harmony node with the three SPHERES satellites developed at MIT.
NASA
In September 2000, Aero-Astro Space Systems Laboratory researchers posed with MIT’s MACE-II (Middeck Active Control Experiment), the first active US scientific investigation performed on the ISS. Left to right: Cemocan Yesil ’03, Professor David Miller ’82, SM ’85, ScD ’88, Gregory Mallory, PhD ’00, and Jeremy Yung ’93, SM ’96, PhD ’02.
DONNA COVENEY