Make AI Writing Work for Your Content & SERP Visibility Strategy [Webinar]

Are your AI writing tools helping or hurting your SEO performance?

Join Nadege Chaffaut and Crystie Bowe from Conductor on September 17, 2025, for a practical webinar on creating AI-informed content that ranks and builds trust.

You’ll Learn How To:

  • Engineer prompts that produce high-quality content
  • Keep your SEO visibility and credibility intact at scale
  • Build authorship and expertise into AI content workflows

Why You Can’t Miss This Session

AI can be a competitive advantage when used the right way. This webinar will give you the frameworks and tactics to scale content that actually performs.

Register Now

Sign up to get actionable strategies for AI content. Can’t make it live? Register anyway, and we’ll send you the full recording.

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

“Suddenly, I was watching him use ChatGPT,” says Declan, 31, who lives in Los Angeles. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”

Declan was so shocked he didn’t say anything, and for the rest of the session he was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen. The session became even more surreal when Declan began echoing ChatGPT in his own responses, preempting his therapist. 

“I became the best patient ever,” he says, “because ChatGPT would be like, ‘Well, do you consider that your way of thinking might be a little too black and white?’ And I would be like, ‘Huh, you know, I think my way of thinking might be too black and white,’ and [my therapist would] be like, ‘Exactly.’ I’m sure it was his dream session.”

Among the questions racing through Declan’s mind was, “Is this legal?” When Declan raised the incident with his therapist at the next session—“It was super awkward, like a weird breakup”—the therapist cried. He explained he had felt they’d hit a wall and had begun looking for answers elsewhere. “I was still charged for that session,” Declan says, laughing.

The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly due to the growing number of people substituting the likes of ChatGPT for human therapists. But less discussed is how some therapists themselves are integrating AI into their practice. As in many other professions, generative AI promises tantalizing efficiency savings, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.

Suspicious sentiments

Declan is not alone, as I can attest from personal experience. When I received a recent email from my therapist that seemed longer and more polished than usual, I initially felt heartened. It seemed to convey a kind, validating message, and its length made me feel that she’d taken the time to reflect on all of the points in my (rather sensitive) email.

On closer inspection, though, her email seemed a little strange. It was in a new font, and the text displayed several AI “tells,” including liberal use of the Americanized em dash (we’re both from the UK), the signature impersonal style, and the habit of addressing each point made in the original email line by line.

My positive feelings quickly drained away, to be replaced by disappointment and mistrust, once I realized ChatGPT likely had a hand in drafting the message—which my therapist confirmed when I asked her.

Despite her assurance that she simply dictates longer emails using AI, I still felt uncertainty over the extent to which she, as opposed to the bot, was responsible for the sentiments expressed. I also couldn’t entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT.

When I took to the internet to see whether others had had similar experiences, I found plenty of examples of people receiving what they suspected were AI-generated communiqués from their therapists. Many, including Declan, had taken to Reddit to solicit emotional support and advice.

So had Hope, 25, who lives on the east coast of the US, and had direct-messaged her therapist about the death of her dog. She soon received a message back. It would have been consoling and thoughtful—expressing how hard it must be “not having him by your side right now”—were it not for the reference to the AI prompt accidentally preserved at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.”

Hope says she felt “honestly really surprised and confused.” “It was just a very strange feeling,” she says. “Then I started to feel kind of betrayed. … It definitely affected my trust in her.” This was especially problematic, she adds, because “part of why I was seeing her was for my trust issues.”

Hope had believed her therapist to be competent and empathetic, and therefore “never would have suspected her to feel the need to use AI.” Her therapist was apologetic when confronted, and she explained that because she’d never had a pet herself, she’d turned to AI for help expressing the appropriate sentiment. 

A disclosure dilemma 

Betrayal or not, there may be some merit to the argument that AI could help therapists better communicate with their clients. A 2025 study published in PLOS Mental Health asked therapists to use ChatGPT to respond to vignettes describing problems of the kind patients might raise in therapy. Not only was a panel of 830 participants unable to distinguish between the human and AI responses, but AI responses were rated as conforming better to therapeutic best practice. 

However, when participants suspected responses to have been written by ChatGPT, they ranked them lower. (Responses written by ChatGPT but misattributed to therapists received the highest ratings overall.) 

Similarly, Cornell University researchers found in a 2023 study that AI-generated messages can increase feelings of closeness and cooperation between interlocutors, but only if the recipient remains oblivious to the role of AI. The mere suspicion of its use was found to rapidly sour goodwill.

“People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, a clinical psychologist and professor at the University of California, Berkeley. “I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’ Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”

In 2023, in the early days of generative AI, the online therapy service Koko conducted a clandestine experiment on its users, mixing in responses generated by GPT-3 with ones drafted by humans. They discovered that users tended to rate the AI-generated responses more positively. The revelation that users had unwittingly been experimented on, however, sparked outrage.

The online therapy provider BetterHelp has also been subject to claims that its therapists have used AI to draft responses. In a Medium post, photographer Brendan Keen said his BetterHelp therapist admitted to using AI in their replies, leading to “an acute sense of betrayal” and persistent worry, despite reassurances, that his data privacy had been breached. He ended the relationship thereafter. 

A BetterHelp spokesperson told us the company “prohibits therapists from disclosing any member’s personal or health information to third-party artificial intelligence, or using AI to craft messages to members to the extent it might directly or indirectly have the potential to identify someone.”

All these examples relate to undisclosed AI usage. Aguilera believes time-strapped therapists can make use of LLMs, but transparency is essential. “We have to be up-front and tell people, ‘Hey, I’m going to use this tool for X, Y, and Z’ and provide a rationale,” he says. People then receive AI-generated messages with that prior context, rather than assuming their therapist is “trying to be sneaky.”

Psychologists are often working at the limits of their capacity, and levels of burnout in the profession are high, according to 2023 research conducted by the American Psychological Association. That context makes the appeal of AI-powered tools obvious. 

But lack of disclosure risks permanently damaging trust. Hope decided to continue seeing her therapist, though she stopped working with her a little later for reasons she says were unrelated. “But I always thought about the AI Incident whenever I saw her,” she says.

Risking patient privacy

Beyond the transparency issue, many therapists are leery of using LLMs in the first place, says Margaret Morris, a clinical psychologist and affiliate faculty member at the University of Washington.

“I think these tools might be really valuable for learning,” she says, noting that therapists should continue developing their expertise over the course of their career. “But I think we have to be super careful about patient data.” Morris calls Declan’s experience “alarming.” 

Therapists need to be aware that general-purpose AI chatbots like ChatGPT are not approved by the US Food and Drug Administration and are not HIPAA compliant, says Pardis Emami-Naeini, assistant professor of computer science at Duke University, who has researched the privacy and security implications of LLMs in a health context. (HIPAA is a set of US federal regulations that protect people’s sensitive health information.)

“This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred by the AI,” she says.

In a recent paper, Emami-Naeini found that many users wrongly believe ChatGPT is HIPAA compliant, creating an unwarranted sense of trust in the tool. “I expect some therapists may share this misconception,” she says.

As a relatively open person, Declan says, he wasn’t completely distraught to learn how his therapist was using ChatGPT. “Personally, I am not thinking, ‘Oh, my God, I have deep, dark secrets,’” he said. But it did still feel violating: “I can imagine that if I was suicidal, or on drugs, or cheating on my girlfriend … I wouldn’t want that to be put into ChatGPT.”

When using AI to help with email, “it’s not as simple as removing obvious identifiers such as names and addresses,” says Emami-Naeini. “Sensitive information can often be inferred from seemingly nonsensitive details.”

She adds, “Identifying and rephrasing all potential sensitive data requires time and expertise, which may conflict with the intended convenience of using AI tools. In all cases, therapists should disclose their use of AI to patients and seek consent.” 

A growing number of companies, including Heidi Health, Upheal, Lyssn, and Blueprint, are marketing specialized tools to therapists, such as AI-assisted note-taking, training, and transcription services. These companies say they are HIPAA compliant and store data securely using encryption and pseudonymization where necessary. But many therapists are still wary of the privacy implications—particularly of services that necessitate the recording of entire sessions.

“Even if privacy protections are improved, there is always some risk of information leakage or secondary uses of data,” says Emami-Naeini.

A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients’ treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people’s experiences of child abuse and addiction problems.

What therapists stand to lose

Beyond data privacy violations, other risks arise when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.

A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.

A 2024 study of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and that it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable.

Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT where he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to “stock-in-trade” therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations.

However, “it didn’t do a lot of digging,” he says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory.”

“I would be skeptical about using it to do the thinking for you,” he says. Thinking, he says, should be the job of therapists.

Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”

Can an AI doppelgänger help me do my job?

Everywhere I look, I see AI clones. On X and LinkedIn, “thought leaders” and influencers offer their followers a chance to ask questions of their digital replicas. OnlyFans creators are having AI models of themselves chat, for a price, with followers. “Virtual human” salespeople in China are reportedly outselling real humans. 

Digital clones—AI models that replicate a specific person—package together a few technologies that have been around for a while now: hyperrealistic video models to match your appearance, lifelike voices based on just a couple of minutes of speech recordings, and conversational chatbots increasingly capable of holding our attention. But they’re also offering something the ChatGPTs of the world cannot: an AI that’s not smart in the general sense, but that ‘thinks’ like you do. 

Who are they for? Delphi, a startup that recently raised $16 million from funders including Anthropic and actor/director Olivia Wilde’s venture capital firm, Proximity Ventures, helps famous people create replicas that can speak with their fans in both chat and voice calls. It feels like MasterClass—the platform for instructional seminars led by celebrities—vaulted into the AI age. On its website, Delphi writes that modern leaders “possess potentially life-altering knowledge and wisdom, but their time is limited and access is constrained.”

It has a library of official clones created by famous figures that you can speak with. Arnold Schwarzenegger, for example, told me, “I’m here to cut the crap and help you get stronger and happier,” before informing me cheerily that I’ve now been signed up to receive the Arnold’s Pump Club newsletter. Even if his or other celebrities’ clones fall short of Delphi’s lofty vision of spreading “personalized wisdom at scale,” they at least seem to serve as a funnel to find fans, build mailing lists, or sell supplements.

But what about for the rest of us? Could well-crafted clones serve as our stand-ins? I certainly feel stretched thin at work sometimes, wishing I could be in two places at once, and I bet you do too. I could see a replica popping into a virtual meeting with a PR representative, not to trick them into thinking it’s the real me, but simply to take a brief call on my behalf. A recording of this call might summarize how it went. 

To find out, I tried making a clone. Tavus, a Y Combinator alum that raised $18 million last year, will build a video avatar of you (plans start at $59 per month) that can be coached to reflect your personality and can join video calls. These clones have the “emotional intelligence of humans, with the reach of machines,” according to the company. “Reporter’s assistant” does not appear on the company’s site as an example use case, but it does mention therapists, physician’s assistants, and other roles that could benefit from an AI clone.

For Tavus’s onboarding process, I turned on my camera, read through a script to help it learn my voice (which also acted as a waiver, with me agreeing to lend my likeness to Tavus), and recorded one minute of me just sitting in silence. Within a few hours, my avatar was ready. Upon meeting this digital me, I found it looked and spoke like I do (though I hated its teeth). But faking my appearance was the easy part. Could it learn enough about me and what topics I cover to serve as a stand-in with minimal risk of embarrassing me?

Via a helpful chatbot interface, Tavus walked me through how to craft my clone’s personality, asking what I wanted the replica to do. It then helped me formulate instructions that became its operating manual. I uploaded three dozen of my stories that it could use to reference what I cover. It may have benefited from having more of my content—interviews, reporting notes, and the like—but I would never share that data for a host of reasons, not the least of which is that the other people who appear in it have not consented to their sides of our conversations being used to train an AI replica.

So in the realm of AI—where models learn from entire libraries of data—I didn’t give my clone all that much to learn from, but I was still hopeful it had enough to be useful. 

Alas, conversationally it was a wild card. It acted overly excited about story pitches I would never pursue. It repeated itself, and it kept saying it was checking my schedule to set up a meeting with the real me, which it could not do as I never gave it access to my calendar. It spoke in loops, with no way for the person on the other end to wrap up the conversation. 

These are common early quirks, Tavus’s cofounder Quinn Favret told me. The clones typically rely on Meta’s Llama model, which “often aims to be more helpful than it truly is,” Favret says, and developers building on top of Tavus’s platform are often the ones who set instructions for how the clones finish conversations or access calendars.

For my purposes, it was a bust. To be useful to me, my AI clone would need to show at least some basic instincts for understanding what I cover, and at the very least not creep out whoever’s on the other side of the conversation. My clone fell short.

Such a clone could be helpful in other jobs, though. If you’re an influencer looking for ways to engage with more fans, or a salesperson for whom work is a numbers game, a clone could give you a leg up, and it might just work. You run the risk that your replica could go off the rails or embarrass the real you, but the tradeoffs might be reasonable. 

Favret told me some of Tavus’s bigger customers are companies using clones for health-care intake and job interviews. Replicas are also being used in corporate role-play, for practicing sales pitches or having HR-related conversations with employees, for example.

But companies building clones are promising that they will be much more than cold-callers or telemarketing machines. Delphi says its clones will offer “meaningful, personal interactions at infinite scale,” and Tavus says its replicas have “a face, a brain, and memories” that enable “meaningful face-to-face conversations.” Favret also told me a growing number of Tavus’s customers are building clones for mentorship and even decision-making, like AI loan officers who use clones to qualify and filter applicants.

Which is sort of the crux of it. Teaching an AI clone discernment, critical thinking, and taste—never mind the quirks of a specific person—is still the stuff of science fiction. That’s all fine when the person chatting with a clone is in on the bit (most of us know that Schwarzenegger’s replica, for example, will not coach me to be a better athlete).

But as companies polish clones with “human” features and exaggerate their capabilities, I worry that people chasing efficiency will start using their replicas at best for roles that are cringeworthy, and at worst for making decisions they should never be entrusted with. In the end, these models are designed for scale, not fidelity. They can flatter us, amplify us, even sell for us—but they can’t quite become us.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

What health care providers actually want from AI

In a market flooded with AI promises, health care decision-makers are no longer dazzled by flashy demos or abstract potential. Today, they want pragmatic and pressure-tested products. They want solutions that work for their clinicians, staff, patients, and their bottom line.

To gain traction in 2025 and beyond, AI developers must deliver the real-world solutions health care providers are looking for right now.

Solutions that fix real problems

Hospitals and health systems are looking at AI-enabled solutions that target their most urgent pain points: staffing shortages, clinician burnout, rising costs, and patient bottlenecks. These operational realities keep leadership up at night, and AI solutions must directly address them.

For instance, hospitals and health systems are eager for AI tools that can reduce documentation burden for physicians and nurses. Natural language processing (NLP) solutions that auto-generate clinical notes or streamline coding to free up time for direct patient care are far more compelling pitches than generic efficiency gains. Similarly, predictive analytics that help optimize staffing levels or manage patient flows can directly address operational workflow and improve throughput.

Ultimately, if an AI solution doesn’t target these critical issues and deliver tangible benefits, it’s unlikely to capture serious buyer interest.

Demonstrate real-world results

AI solutions need validation in environments that mirror actual care settings. The first step toward that is to leverage high-quality, well-curated real-world data to drive reliable insights and avoid misleading results when building and refining AI models. 

Then, hospitals and health systems need evidence that the solution does what it claims to do, for instance through independent third-party validation, pilot projects, peer-reviewed publications, or documented case studies.

Mayo Clinic Platform offers a rigorous independent process where clinical, data science, and regulatory experts evaluate a solution for intended use, proposed value, and clinical and algorithmic performance, which gives innovators the credibility their solutions need to win the confidence of health-care leaders.    

Integration with existing systems

With so many demands, health-care IT leaders have little patience for standalone AI tools that create additional complexity. They want solutions that integrate seamlessly into existing systems and workflows. Compatibility with major electronic health record (EHR) platforms, robust APIs, and smooth data ingestion processes are now baseline requirements.

Custom integrations that require significant IT resources—or worse, create duplicative work—are deal breakers for many organizations already stretched thin. The less disruption an AI solution introduces, the more likely it is to gain traction. This is the reason solution developers are turning to platforms like Mayo Clinic Platform Solutions Studio, a program that provides seamless integration, single implementation, expert guidance to reduce risk, and a simplified process to accelerate solution adoption among healthcare providers. 

Explainability and transparency

The importance of trust cannot be overstated when it comes to health care, and transparency and explainability are critical to establishing trust in AI. As AI models grow more complex, health-care providers recognize that simply knowing what an algorithm predicts isn’t enough. They also need to understand how it arrived at that insight.

Health-care organizations are increasingly wary of black-box AI systems whose logic remains opaque. Instead, they’re demanding solutions that offer clear, understandable explanations clinicians can relay confidently to peers, patients, and regulators.

As McKinsey research shows, organizations that embed explainability into their AI strategy not only reduce risk but also see higher adoption, better performance outcomes, and stronger financial returns. Solution developers that can demystify their models, provide transparent performance metrics, and build trust at every level will have a significant edge in today’s health-care market.

Clear ROI and low implementation burden

Hospitals and health systems want to know precisely how quickly an AI solution will pay for itself, how much staff time it will save, and what costs it will help offset. The more specific and evidence-backed the answers, the higher the rate of adoption.

Solution developers that offer comprehensive training and responsive support are far more likely to win deals and keep customers satisfied over the long term.

Alignment with regulatory and compliance needs

As AI adoption grows, so does regulatory scrutiny. Health-care providers are increasingly focused on ensuring that any new solution complies with HIPAA, data privacy laws, and emerging guidelines around AI governance and bias mitigation.

Solution developers that can proactively demonstrate compliance provide significant peace of mind. Transparent data handling practices, rigorous security measures, and alignment with ethical AI principles are all becoming essential selling points as well.

A solution developer that understands health care

Finally, it’s not just about the technology. Health-care providers want partners that genuinely understand the complexities of clinical care and hospital operations. They’re looking for partners that speak the language of health care, grasp the nuances of change management, and appreciate the realities of delivering patient care under tight margins and high stakes.

Successful AI vendors recognize that even the best technology must fit into a highly human-centered and often unpredictable environment. Long-term partnerships, not short-term sales, are the goal.

Delivering true value with AI

To earn their trust and investment, AI developers must focus relentlessly on solving real problems, demonstrating proven results, integrating without friction, and maintaining transparency and compliance.

Those that deliver on these expectations will have the chance to help shape the future of health care.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

How healthcare accelerator programs are changing care

As healthcare faces mounting pressures, from rising costs and an aging population to widening disparities, forward-thinking innovations are more essential than ever.

Accelerator programs have proven to be powerful launchpads for health tech companies, often combining resources, mentorship, and technology that startups otherwise would not have access to. By joining these fast-moving platforms, startups are better able to rapidly innovate, enhance, and scale their healthcare solutions, bringing transformative approaches to hospitals and patients faster.

So, why are healthcare accelerators becoming essential to the evolution of the industry? Below are the key reasons these programs are reshaping health innovation, and how they are helping to make care more personalized, proactive, and accessible.

Empowering growth and scaling impact       

Healthcare accelerator programs offer a powerful combination of guidance, resources, and connections to help early-stage startups grow, scale, and succeed in a complex industry. 

Participants typically benefit from: 

  • Expert mentorship from seasoned healthcare professionals, entrepreneurs, and industry leaders to navigate clinical, regulatory, and business challenges
  • Access to valuable resources such as clinical data, testing environments, and technical infrastructure to refine and validate health tech solutions
  • Strategic support for growth including investor introductions, partnership opportunities, and go-to-market guidance to expand reach and impact 

Speeding up innovation 

Accelerators help startups and early-stage companies bring their solutions to market faster by streamlining the path through one of the most complex industries: healthcare. Traditionally, innovation in this space is slowed by regulatory hurdles, extended sales cycles, clinical validation requirements, and fragmented data systems.  

Through structured support, accelerators help companies refine their product-market fit, navigate compliance and regulatory landscapes, integrate with healthcare systems, and gather the clinical evidence needed to build trust and credibility. They also open doors to early pilot opportunities, customer feedback, and strategic partnerships, compressing what could take years into just a few months. 

By removing barriers and accelerating critical early steps, these programs enable digital health innovators to reach the market more efficiently, with stronger solutions and a clearer path to impact. 

Connecting startups with key stakeholders 

Today, many accelerator programs are developed by large healthcare organizations that are driving change from within. These programs are especially beneficial to startups because their parent organizations have strong partnerships with hospitals, pharma companies, insurance providers, and regulators. This gives startups a chance to validate their ideas in real-world settings, gather clinical feedback early, and scale more effectively.  

Many accelerators also bring together people from different fields, including doctors, engineers, data scientists, and designers, encouraging fresh perspectives on persistent problems like chronic disease management, preventative care, data interoperability, and patient engagement. 

Breaking barriers to global expansion 

Healthcare accelerator programs act as gateways for international digital health companies looking to enter the U.S. market, often considered one of the most complex and highly regulated healthcare landscapes in the world. These programs provide tailored support to navigate U.S. compliance standards, understand payer and provider dynamics, and tailor offerings to meet the needs of U.S. patients and care delivery models. 

Through market-specific mentorship, strategic introductions, and access to a robust health innovation ecosystem, accelerators help international startups overcome geographic and regulatory barriers, enabling global ideas to scale and make an impact where they’re needed most. 

Building the future of healthcare

The role of healthcare accelerator programs extends far beyond startup support. They are helping to redefine how innovation happens, shifting it from isolated efforts to collaborative ecosystems of change. By bridging gaps between early-stage technology and real-world implementation, these programs play a critical role in making healthcare more personalized, preventative, and equitable.

As the digital transformation of healthcare continues, accelerator programs will remain indispensable in cultivating the next generation of breakthroughs, ensuring that bold ideas are not only born, but brought to life in meaningful, measurable ways.

Spotlight: Mayo Clinic Platform_Accelerate

One standout example of this innovation-forward approach is Mayo Clinic Platform_Accelerate, a 30-week accelerator program designed to help health tech startups reach market readiness. Participants gain access to de-identified clinical data, prototyping labs, and guidance from experts across clinical, regulatory, and business domains.

By combining Mayo Clinic’s legacy of clinical excellence with a forward-thinking innovation model, the Mayo Clinic Platform_Accelerate program helps promising startups to refine their solutions and prepare for meaningful scale, transforming how care is delivered across the continuum.

Finding value in accelerator programs

In a time when healthcare must evolve faster than ever, accelerator programs have become vital to the industry’s future. By supporting early-stage innovators with the tools, mentorship, and networks they need to succeed, these programs are paving the way for smarter, safer, and more connected care.

Whether tackling chronic disease, reimagining patient engagement, or unlocking the power of data, the startups nurtured in accelerator programs are helping to shape a more resilient and responsive health system, one innovation at a time.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

The Download: therapists secretly using AI, and Apple AirPods’ hearing aid potential

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

For the rest of the session, Declan was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen, as the therapist took what Declan was saying, put it into ChatGPT, and then parroted its answers.

But Declan is not alone. In fact, a growing number of people are reporting receiving AI-generated communiqués from their therapists. Clients’ trust and privacy are being abandoned in the process. Read the full story.

—Laurie Clarke

Apple AirPods: a gateway hearing aid

—Ashley Shew

When the US Food and Drug Administration approved hearing-aid software for Apple’s AirPods Pro in September 2024, with a device price point around $200, I was excited.

I have hearing loss and tinnitus, and my everyday hearing aids cost just over $2,000. Ninety percent of the hearing-aid market is concentrated in the hands of a few companies, and there’s little competitive pricing. So I was thrilled that a major tech company had entered this field with the AirPods Pro 2. Here’s what I made of them.

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 MAHA is in chaos
RFK Jr’s movement is tearing itself apart over what it wants to achieve. (WSJ $)
+ Trying to pressure food companies to alter their products is unlikely to work. (The Atlantic $)
+ Ultra-processed food makes up a sizable proportion of the American diet. (Axios)
+ RFK Jr’s plan to improve America’s diet is missing the point. (MIT Technology Review)

2 DOGE is using AI to target SEC rules to ditch
Experts fear its decisions won’t be checked by qualified humans. (The Information $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

3 Salesforce has replaced around 4,000 jobs with AI agents
It’s slashed its support staff nearly in half. (SF Chronicle $)
+ Workers are trying to weather the AI-induced storm. (Vox)
+ AI is coming for the job market, security, and prosperity. (MIT Technology Review)

4 What’s up with China’s EV industry?
Its cutthroat competitive practices are starting to grate on the government. (NYT $)
+ The country’s robotmakers are on the rise. (FT $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

5 A “nearly naked” black hole has been spotted
The never-before-seen black hole may have been created moments after the big bang. (The Guardian)

6 How to make quantum computers useful
Researchers have turned their attention towards making software for the machines. (FT $)
+ Why AI could eat quantum computing’s lunch. (MIT Technology Review)

7 OnlyFans has a piracy problem
Adult creators’ content isn’t staying behind the paywall. (404 Media)

8 These humans are paid to fix AI slop
Anyone can prompt AI, but the results aren’t always good. (NBC News)

9 The hottest gadget for kids is a landline phone 
And they’re learning phone etiquette for the first time. (Insider $)

10 Meet iTunes’ diehard fans
They’re eschewing streaming platforms in favor of their digital libraries. (WP $)
+ How to break free of Spotify’s algorithm. (MIT Technology Review)

Quote of the day

“The calculator doesn’t construct facts about world knowledge and give them to you.”

—Elisha Roberts, assistant director at the nonprofit Colorado Education Initiative, tells Bloomberg she doesn’t buy the idea that AI is comparable to other classroom tools like the calculator.

One more thing

Supershoes are reshaping distance running

Since 2016, when Nike introduced the Vaporfly, a paradigm-shifting shoe that helped athletes run more efficiently (and therefore faster), the elite running world has muddled through a period of soul-searching over the impact of high-tech footwear on the sport.

“Supershoes”—which combine a lightweight, energy-returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.

To some, this is a sign of progress. In much of the world, elite running lacks a widespread following. Record-breaking adds a layer of excitement. And the shoes have benefits beyond the clock: most important, they help minimize wear on the body and enable faster recovery from hard workouts and races.

Still, some argue that they’ve changed the sport too quickly. Read the full story.

—Jonathan W. Rosen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Happy birthday to Keanu Reeves, who turns 61 today! Here’s a compilation of his hilariously bad acting in Bram Stoker’s Dracula.
+ Why do some cats hate water, yet others love it?
+ If you fancy setting a Guinness World Record, there are a few still up for grabs.
+ To mark world coconut day (what do you mean, you forgot?), check out these delicious-looking recipes 🥥

Vibe Coding Is Ecommerce’s New Excel

Relying on AI-generated software can make ecommerce operators nervous until they realize the process is a bit like using a spreadsheet.

The term “vibe coding” describes an AI-first approach to software development. Non-technical owners and managers can describe a desired software tool in natural language via a GPT-like chat and have AI produce it.

The approach has significant potential for ecommerce, allowing companies to quickly experiment with custom software tools at a very low cost, nearly zero in some cases.

Vibe coding will not replace platforms such as Shopify or even developers, but it can help teams prototype, automate, and test ideas quickly — much like Excel did for finance.

Vibe Coding

Andrej Karpathy, a prominent computer scientist and former AI director at Tesla, coined the phrase vibe coding in February 2025.

Karpathy suggested allowing AI to generate most of the code, while humans provide natural language goals, feedback, and corrections.

The idea rests on five pillars.

  • Natural language. Programming instructions are conversational, as in “make a pop-up modal with a discount banner for a WooCommerce store.” AI acts on human intent.
  • Prototyping. AI produces prototypes rapidly, providing a minimal version of the desired application in minutes.
  • Iteration. The AI will not get it right with the first or even second prototype. Rather, vibe coding tools receive human feedback with each iteration and try again.
  • No coding skills. Understanding how AI generates code is helpful, but vibe coding itself requires no programming skills.
  • Limited scope. Vibe coding will not replace software engineers at scale. Mission-critical applications still require professionals, much like spreadsheet power-users eventually graduate to advanced systems or databases.

Like a Spreadsheet

For vibe coding, follow your instincts and instruct the AI on what the software should do.

Entrepreneur Joe Procopio likened vibe coding to the introduction of spreadsheets in a recent Inc. opinion article.

“In 1979, when Apple released VisiCalc, it was the killer app that made everyone’s job and life a lot easier. It was visual math in a box, unheard of at the time,” wrote Procopio.

Spreadsheets changed the world because they provided a visual layer over math and logic for non-programmers.

Folks using spreadsheets — then and now — did not need to know formulas or programming; instead, they typed values into cells, dragged across a table, and suddenly they had payroll models, inventory systems, and even entire annual budgets.

Vibe coding, Procopio argues, is a similar visual layer, but it abstracts software instead of math. An operator describes what the software should do in plain language, and the AI translates it into working code.

Applying the Analogy

Procopio’s analogy applies to ecommerce dashboards, automations, and even limitations.

Dashboards

Imagine an ecommerce shop wants to test whether adding social proof to a widget boosts conversions. The store owner might sign up for two or three apps — testing, upselling, social — to run the experiment for $150 per month in subscriptions, plus web development.

With vibe coding, it might take a few hours to describe and iterate a custom app that manages the testing, upselling, and social proof in one.

A spreadsheet enables a business to build its own reporting dashboard without hiring a data analyst. Vibe coding similarly creates a front-end experiment without subscribing to apps or hiring a developer.

Put another way, many early-stage ecommerce founders modeled business plans in Excel long before they had accountants. Likewise, ecommerce entrepreneurs can model new features via vibe coding before paying developers.

Automations

Spreadsheet macros are similar in concept to vibe coding automations.

For example, a marketing manager might download advertising performance reports from multiple sources, reformat them via macros, and upload the data into accounting software. With vibe coding, the manager can describe this workflow and generate a tool to automate it.

Other lightweight automations could include:

  • Converting supplier CSVs into Shopify’s import format.
  • Auto-generating invoices from order exports.
  • Flagging orders with mismatched billing and shipping addresses.

These are tedious, repetitive jobs that AI-generated scripts can handle much like a macro in Excel.
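
To make the spreadsheet-macro comparison concrete, here is roughly what a vibe-coded script for the last item above might look like. This is a minimal sketch, not any platform’s actual tool: the file name (orders_export.csv) and the column headers (order_id, billing_address, shipping_address) are assumptions you would adjust to match your own export.

```python
# Hypothetical sketch: flag orders whose billing and shipping addresses differ.
# The export file name and column names are assumptions, not a specific platform's format.
import csv

def normalize(address: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't trigger flags."""
    return " ".join(address.lower().split())

def flag_mismatched_orders(path: str) -> list[str]:
    """Return the IDs of orders whose billing and shipping addresses don't match."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if normalize(row["billing_address"]) != normalize(row["shipping_address"]):
                flagged.append(row["order_id"])
    return flagged

if __name__ == "__main__":
    for order_id in flag_mismatched_orders("orders_export.csv"):
        print(f"Review order {order_id}: billing and shipping addresses differ")
```

In practice, the operator describes the rule in plain language, the AI produces something like this, and each round of feedback refines it, the same loop a spreadsheet user goes through when tweaking a macro.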

Limitations

My final comparison from Procopio’s spreadsheet analogy is knowing when to stop.

It’s a bad idea to run an entire ecommerce shop out of a few complicated Google Sheets, and vibe coding will not replace Shopify or any other software that solves complex problems or requires security and reliability.

Unlocking Vibe Coding

In short, vibe coding is a creative, iterative, AI-driven approach to building software where natural language replaces syntax, speed replaces polish, and experimentation replaces structure.

Making vibe coding as approachable as a spreadsheet means that just about any ecommerce operation can automate workflows, prototype features, and test ideas.

Google Avoids Breakup As Judge Bars Exclusive Default Search Deals

A federal judge outlined remedies in the U.S. search antitrust case that bar Google from using exclusive default search deals but stop short of forcing a breakup.

Reuters reports that Google won’t have to divest Chrome or Android, but it may have to share some search data with competitors under court-approved terms.

Google says it will appeal.

What The Judge Ordered

Judge Amit P. Mehta barred Google from entering or maintaining exclusive agreements that tie the distribution of Search, Chrome, Google Assistant, or the Gemini app to other apps, licenses, or revenue-share arrangements.

The ruling allows Google to continue paying for placement but prohibits exclusivity that could block rivals.

The order also envisions Google making certain search and search-ad syndication services available to competitors at standard rates, alongside limited data sharing for “qualified competitors.”

Mehta ordered Google to share some search data with competitors under specific protections to help them improve their relevance and revenue. Google argued this could expose its trade secrets and plans to appeal the decision.

The judge directed the parties to meet and submit a revised final judgment by September 10. Once entered, the remedies would take effect 60 days later, run for six years, and be overseen by a technical committee. Final language could change based on the parties’ filing.

How We Got Here

In August 2024, Mehta found Google illegally maintained a monopoly in general search and related text ads.

Judge Amit P. Mehta wrote in his August 2024 opinion:

“Google is a monopolist, and it has acted as one to maintain its monopoly.”

This decision established the need for remedies. Today’s order focuses on distribution and data access, rather than breaking up the company.

What’s Going To Change

Ending exclusivity changes how contracts for default placements can be made across devices and browsers. Phone makers and carriers may need to update their agreements to follow the new rules.

However, the ruling doesn’t require any specific user experience change, like a choice screen. The results will depend on how new contracts are created and approved by the court.

Next Steps

Expect a gradual rollout if the final judgment follows today’s outline.

Here are the next steps to watch for:

  • The revised judgment that the parties will submit by September 10.
  • Changes to contracts between Google and distribution partners to meet the non-exclusivity requirement.
  • Any pilot programs or rules that specify who qualifies as a “qualified competitor” and what data they can access.

Separately, Google faces a remedies trial in the ad-tech case in late September. This trial could lead to changes that affect advertising and measurement.

Looking Ahead

If the parties submit a revised judgment by September 10, changes could start about 60 days after the court’s final order. This might shift if Google gets temporary relief during an appeal.

In the short term, expect contract changes rather than product updates.

The final judgment will determine who can access data and which types are included. If the program is limited, it may not significantly affect competition. If broader, competitors might enhance their relevance and profit over the six-year period.

Also watch the ad tech remedies trial this month. Its results, along with the search remedies, will shape how Google handles search and ads in the coming years.

Internal Linking Grows Up: Evolving From Link Juice To Entity Maps

Let’s reminisce for a moment. Do you remember how, back in 2020, we all obsessed over “link juice” and PageRank flow as far as internal links are concerned?

In 2025, what matters more is how your internal links define the entities and relationships on your site.

Internal linking is no longer just about distributing authority. It’s about:

  • Building your own semantic map that Google can trust.
  • Reinforcing your topical authority.
  • Earning a place in an AI-search-forward landscape.

The last full guide I wrote on internal linking strategies was in 2020, and – well – much has happened since then (to say the least).

And most internal linking guides treat links as simple “traffic routers,” ignoring their role in building entity context.

So today, yes, I’m revisiting some of the basic building blocks of SEO, but we’re going to expand how we think about internal linking.

If you’re already deep into entity-first SEO and apply it to your internal linking tactics, skip ahead to the action items to ensure you’re implementing it well.

For everyone else, I’ll explain why tightening up your internal linking structure isn’t just table stakes. It’s one of the simplest core levers to influence organic visibility.

Internal linking is the age-old SEO practice of connecting one page on your site to another page, all on the same domain.

These links act like the roads or highways that guide users through your content. But they also help search engines understand how your pages relate.

In the past, we thought about internal links as “pipes” for PageRank.

Add enough links from your homepage or other strong, well-ranking pages, and you’d push authority toward the URLs you wanted to rank.

That view isn’t wrong; it’s just incomplete.

Today, internal links aren’t just distributing authority. They’re defining the semantic structure of your site.

Internal linking isn’t simply a practice that routes people (and bots/crawlers) to the pages you want them to go to.

In fact, thinking about internal linking only this way is exactly how we end up half-assing the practice or letting it sit on the back burner.

The words you use in anchor text and the way you connect hubs of related content all signal to search engines: These are the entities your brand wants to be known for.

Strategic internal linking can do three critical things for your site:

  1. Reinforce entity authority. You’re signaling to Google, and everyone else, which concepts you want associated with your brand.
  2. Improve index stability. Pages that are well-linked internally are more likely to be crawled often – and that means they stay indexed and are likely to show up in AI-generated results. (This is especially true for Bing optimization, which seems to struggle more with indexing than Google. Bing is often forgotten when it comes to AEO/GEO because everyone assumes ChatGPT only uses Google, but it doesn’t.)
  3. Drive user engagement. Smart placement and descriptive anchors help users explore more of your related content, increasing engagement signals.

Put simply: Internal links aren’t just SEO plumbing. They’re how you build a discoverable, authoritative entity graph inside your own site.

Generative AI being infused into all modalities of search means Google and LLMs aren’t just hiking all over the web searching for crawlable/indexable pages — search engines and LLMs are mapping relationships between entities and judging your brand’s authority accordingly.

But currently, there’s some disagreement on whether or not LLMs can navigate your site through internal links.

My hypothesis? LLMs do form entity relationships via your strategic use of internal links. But probably not through traditionally “crawling” them like search engines do, and more purely based on text signals on the page.

And if that turns out to be true – keeping in mind that LLMs often use search engine results to ground themselves – internal linking also benefits LLM optimization/AEO/GEO mostly by improving Google/Bing ranks, which LLMs heavily rely on.

I dropped the question over on LinkedIn; you can check out the discussion there. But a few responses stood out. (Take a look at the full thread, but I also highly recommend following these pros to learn more from each of them.)

Dan Petrovic, founder and CEO of Dejan SEO, gave a detailed answer about the differences between a) the types of LLM crawlers and b) the different LLMs and how they behave.

Lily Grozeva, head of SEO at Verto Digital, rightfully called out that we can all get the answer in our own logfiles.

Chee Lo, head of SEO at Trustpilot, shared his experience with Perplexity, which seems to be a bit more aggressive than other bots.

Sites with clear internal linking patterns that mirror how humans connect concepts are (in theory, more data will tell over time) better positioned to be included in AI-generated answers and entity-rich snippets.

Way back in 2019, I explained the following in Semantic content optimization with entities:

Entities are semantic, interconnected objects that help machines to understand explicit and implicit language. In simpler terms, they are words (nouns) that represent any type of object, concept, or subject … According to Cindy Krum and her fantastic entity series, Google seems to restructure its whole approach to indexing based on entities (while you’re at it, read AJ Kohn’s article about embeddings). Understanding entities and how Google uses them in search sharpens our standards for content creation, optimization, and the use of schema markup.

Entities are nouns like events, ideas, people, places, etc. They’re the building blocks of ideas and how those ideas relate to each other. (They’re not just “keywords.”)

Search engines and LLMs use semantic relationships between entities to (1) reduce ambiguity, (2) reinforce authority/canonical sources on your site, and (3) map out relationships between topics, features, services, and audiences across your site.

When you internally link pages together with strategically descriptive anchors, you’re telling search engines how your site fits together … and you’re training them on how entities across your site connect.

Therefore, by practicing internal linking through an entity-based lens, you’re creating stronger, clearer relationships and patterns for Google/search engines/LLMs to understand.

Entity-first SEO starts with defining the people, products, concepts, and places your brand “owns.”

If you’re a B2B SaaS company offering a CRM, those entities might include your:

  • Core product (CRM platform).
  • Features (pipeline management, email automation, reporting dashboards).
  • Use cases (sales enablement, customer support, marketing teams).
  • Personas/target ICPs (heads of sales at mid-market companies, startup founders scaling revenue teams, or enterprise IT buyers).

Taking this example, you’re going to think in terms of topic-first SEO:

  • Hub or pillar pages = parent entities. These are your central nodes – the definitive resource on a core concept. For a B2B SaaS CRM, it might be the CRM platform overview page.
  • Cluster pages = sub-entities. These are the supporting nodes that expand on the hub. For a CRM, the CRM hub branches into feature pages like pipeline management, email automation, and reporting dashboards.
  • Cross-link clusters to show relatedness. Don’t just point everything back to the hub – connect the clusters to each other to model real-world relationships. In the instance of the CRM, pipeline management integrates with email automation to shorten deal cycles.
  • Navigation and breadcrumbs reinforce hierarchy. The visible structure tells both users and Google how entities fit together. Example: Home → Products → CRM → Pipeline Management.
  • Include personas in the implementation. This reinforces the relationship: This persona → has this pain point → solved by this feature → within this product topic.

For example, look at this topic cluster map created with Screaming Frog:

Image Credit: Kevin Indig

It shows two clusters with nodes very close together (red and orange) and three other clusters that are spread apart (green, blue, and purple). Guess which clusters outperform the others in organic search? Red and orange!
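
If you want to approximate this kind of cluster view without a visualization tool, one rough approach is to export your crawler’s internal-link report and count how many links stay inside each topic cluster versus jump elsewhere. The sketch below is a minimal illustration under assumed conditions: it expects a CSV named internal_links.csv with "source" and "destination" columns, and it treats the first URL path segment as a stand-in for a page’s topic cluster, which won’t hold for every site architecture.

```python
# Rough sketch: measure how tightly each topic cluster is linked internally.
# Assumes a crawler export "internal_links.csv" with "source" and "destination"
# columns; the first URL path segment is used as a proxy for the cluster.
import csv
from collections import defaultdict
from urllib.parse import urlparse

def cluster_of(url: str) -> str:
    """Approximate a page's topic cluster from its first URL path segment."""
    path = urlparse(url).path.strip("/")
    return path.split("/")[0] if path else "home"

intra, cross = defaultdict(int), defaultdict(int)
with open("internal_links.csv", newline="") as f:
    for row in csv.DictReader(f):
        source_cluster = cluster_of(row["source"])
        if source_cluster == cluster_of(row["destination"]):
            intra[source_cluster] += 1   # link stays inside its cluster
        else:
            cross[source_cluster] += 1   # link jumps to another cluster

for cluster in sorted(set(intra) | set(cross)):
    total = intra[cluster] + cross[cluster]
    print(f"{cluster}: {intra[cluster]}/{total} internal links stay inside the cluster")
```

Clusters where most links stay internal will tend to look like the tight red and orange groups above, while clusters whose links scatter across the site will appear spread apart.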

Here’s how you connect those entities into a meaningful structure in the copy on the page:

1. Anchor text = entity disambiguation.

Instead of linking with vague text, use descriptive anchors that clarify which entity the link refers to. For example, if your CRM has a feature page about pipeline management, link to it with “sales pipeline management CRM feature” language.

2. Consistency matters.

If you link to that pipeline management page with scattered variations like “pipeline automation tool,” “deal tracking software,” and “CRM feature,” you dilute the entity connection. (Close variations like “pipeline management tool,” “sales pipeline management CRM feature,” and “pipeline management features,” on the other hand, are fine – they’re derivatives of the same core phrase.)

By sticking to clear, consistent anchors, you signal to Google that this is the page that defines “pipeline management” for your brand.
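
To audit this at scale rather than page by page, a rough Python sketch like the one below groups inbound anchor text by destination URL and flags pages whose anchors are scattered across many distinct phrasings. The file and column names are assumptions – match them to whatever your crawler exports:

```python
import csv
from collections import Counter, defaultdict

# Group inbound anchor text by destination URL. The file name and the
# Destination/Anchor column names are assumptions - adjust to your crawler's export.
anchors_by_page = defaultdict(Counter)
with open("all_inlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        anchor = (row.get("Anchor") or "").strip().lower()
        if anchor:
            anchors_by_page[row["Destination"]][anchor] += 1

# Flag pages whose inbound anchors are scattered across many distinct phrasings.
DISTINCT_ANCHOR_THRESHOLD = 5  # arbitrary; tune for your site

for page, counts in anchors_by_page.items():
    if len(counts) > DISTINCT_ANCHOR_THRESHOLD:
        top = ", ".join(anchor for anchor, _ in counts.most_common(3))
        print(f"{page}: {len(counts)} distinct anchors (most common: {top})")
```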

3. Context strengthens meaning.

The sentence or paragraph around the link can add semantic weight. For example:

“Our CRM includes pipeline management, so your sales team can track every deal from prospecting to close.”

That tells Google (and users) that pipeline management isn’t just a phrase; it’s a core feature within the CRM product.

4. Include personas.

Making personas a criterion for internal linking is a no-brainer, because from a psychological perspective, a link automatically signals “there’s more for you here.”

If your internal link is placed on the right word – one that triggers a response in your target ICPs – and in the right areas of the page, it increases the chance that people stay on the site. It’s also just a better experience – and good customer service – to help site visitors find the right offering for themselves, all with the goal of increasing trust and the likelihood that they take an action or convert.

If one of your ICPs is the head of sales at mid-market SaaS companies, you might internally link from a blog article like “10 Ways SaaS Sales Leaders Can Shorten Their Sales Cycle” directly to your pipeline management feature page, with surrounding copy that explains how your offering solves this problem. That link makes the relationship explicit: This is the feature that solves this persona’s pain point.

Ultimately, think of every internal link as a connector in your brand’s knowledge graph.

Together, these links show how entities and topics (like CRM platform → pipeline management → sales enablement → head of sales persona) relate to each other, and why your site is authoritative on them.

Amanda Johnson jumping in here to add: Basically, show + tell people (and search engines/LLMs) what you want them to know via literal semantics. It really is that simple. No need to overthink this. Use clear, descriptive, accurate anchor text for the internally linked page, use it consistently, and use the surrounding copy to give context for how and why the page is linked there.

Ultimately, if you practice internal linking thoughtfully and methodically, you end up with a better user experience and more thorough reinforcement of internal entity relationships (which can improve topical authority signals).

Worried that your most important pages aren’t getting enough visibility because you haven’t set up a clear linking structure? Following the guidance above will help you build one.

And tools with internal link auditing (like Semrush, Ahrefs, Clearscope, Surfer, etc.) will help you implement your system. Some SEO tools also give page-level internal linking recommendations and suggest which on-page copy to use as anchor text.
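
If you’d rather run a quick check yourself first, here’s a minimal Python sketch. It assumes a local copy of your XML sitemap plus the same hypothetical internal-link CSV used above, and it surfaces sitemap URLs that receive few or no internal links:

```python
import csv
import xml.etree.ElementTree as ET
from collections import Counter

# Count inbound internal links per destination URL from the same kind of
# link export used above (file and column names are assumptions).
inlink_counts = Counter()
with open("all_inlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        inlink_counts[row["Destination"]] += 1

# Read the URLs you care about from a local copy of your XML sitemap.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_urls = [
    (loc.text or "").strip()
    for loc in ET.parse("sitemap.xml").getroot().findall(".//sm:loc", ns)
]

# Surface important pages that receive few or no internal links.
MIN_INLINKS = 3  # arbitrary threshold

for url in sitemap_urls:
    if inlink_counts[url] < MIN_INLINKS:
        print(f"{url}: only {inlink_counts[url]} internal inlinks")
```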

Internal linking hasn’t just been about crawlability for some time now.

By structuring links around topics, entities, and even the user journeys of your target personas, you communicate your site’s semantic map to Google and LLMs.


Featured Image: Paulo Bobita/Search Engine Journal

Here’s how we picked this year’s Innovators Under 35

Next week, we’ll publish our 2025 list of Innovators Under 35, highlighting smart and talented people who are working in many areas of emerging technology. This new class features 35 accomplished founders, hardware engineers, roboticists, materials scientists, and others who are already tackling tough problems and making big moves in their careers. All are under the age of 35. 

One is developing a technology to reduce emissions from shipping, while two others are improving fertility treatments and creating new forms of contraception. Another is making it harder for people to maliciously share intimate images online. And quite a few are applying artificial intelligence to their respective fields in novel ways. 

We’ll also soon reveal our 2025 Innovator of the Year, whose technical prowess is helping physicians diagnose and treat critically ill patients more quickly. What’s more (here’s your final hint), our winner even set a world record as a result of this work. 

MIT Technology Review first published a list of Innovators Under 35 in 1999. It’s a grand tradition for us, and we often follow the work of various featured innovators for years, even decades, after they appear on the list. So before the big announcement, I want to take a moment to explain how we select the people we recognize each year. 

Step 1: Call for nominations

Our process begins with a call for nominations, which typically goes out in the final months of the previous year and is open to anyone, anywhere in the world. We encourage people to nominate themselves, which takes just a few minutes. This method helps us discover people doing important work that we might not otherwise encounter. 

This year we had 420 nominations. Two-thirds of our candidates were put forward by someone else and one-third nominated themselves. We received nominations for people located in about 40 countries. Nearly 70% were based in the United States, with the UK, Switzerland, China, and the United Arab Emirates, respectively, having the next-highest concentrations. 

After nominations close, a few editors spend several weeks reviewing the nominees and selecting semifinalists. During this phase, we look for people who have developed practical solutions to societal issues or made important scientific advances that could translate into new technologies. Their work should have the potential for broad impact—it can’t be niche or incremental. And what’s unique about their approach must be clear. 

Step 2: Semifinalist applications 

This year, we winnowed our initial list of hundreds of nominees to 108 semifinalists. Then we asked those entrants for more information to help us get to know them better and evaluate their work. 

We request three letters of reference and a résumé from each semifinalist, and we ask all of them to answer a few short questions about their work. We also give them the option to share a video or pass along relevant journal articles or other links to help us learn more about what they do.

Step 3: Expert judges weigh in

Next, we bring in dozens of experts to vet the semifinalists. This year, 38 judges evaluated and scored the applications. We match the contenders with judges who work in similar fields whenever possible. At least two judges review each entrant, though most are seen by three. 

All these judges volunteer their time, and some return to help year after year. A few of our longtime judges include materials scientists Yet-Ming Chiang (MIT) and Julia Greer (Caltech), MIT neuroscientist Ed Boyden, and computer scientist Ben Zhao of the University of Chicago. 

John Rogers, a materials scientist and biomedical engineer at Northwestern University, has been a judge for more than a decade (and was featured on our very first Innovators list, in 1999). Here’s what he had to say about why he stays involved: “This award is compelling because it recognizes young people with scientific achievements that are not only of fundamental interest but also of practical significance, at the highest levels.” 

Step 4: Editors make the final calls 

In a final layer of vetting, editors who specialize in covering biotechnology, climate and energy, and artificial intelligence review the semifinalists whom judges scored highly in their respective areas. Staff editors and reporters can also nominate people they’ve come across in their coverage, and we add them to the mix for consideration. 

Last, a small team of senior editors reviews all the semifinalists and the judges’ scores, as well as our own staff’s recommendations, and selects 35 honorees. We aim for a good combination of people from a variety of disciplines working in different regions of the world. And we take a staff vote to pick an Innovator of the Year—someone whose work we particularly admire. 

In the end, it’s impossible to include every deserving individual on our list. But by incorporating both external nominations and outside expertise from our judges, we aim to make the evaluation process as rigorous and open as possible.  

So who made the cut this year? Come back on September 8 to find out.