Building a high performance data and AI organization (2nd edition)

Four years is a lifetime when it comes to artificial intelligence. Since the first edition of this study was published in 2021, AI’s capabilities have been advancing at speed, and the advances have not slowed since generative AI’s breakthrough. For example, multimodality—the ability to process information not only as text but also as audio, video, and other unstructured formats—is becoming a common feature of AI models. AI’s capacity to reason and act autonomously has also grown, and organizations are now starting to work with AI agents that can do just that.

Amid all the change, there remains a constant: the quality of an AI model’s outputs is only ever as good as the data
that feeds it. Data management technologies and practices have also been advancing, but the second edition of this study suggests that most organizations are not adopting them fast enough to keep up with AI’s development. As a result of this and other hindrances, relatively few organizations are delivering the desired business results from their AI strategy. No more than 2% of the senior executives we surveyed rate their organizations highly in terms of delivering results from AI.

To determine the extent to which organizational data performance has improved as generative AI and other AI advances have taken hold, MIT Technology Review Insights surveyed 800 senior data and technology executives. We also conducted in-depth interviews with 15 technology and business leaders.

Key findings from the report include the following:

Few data teams are keeping pace with AI. Organizations are doing no better today at delivering on data strategy than in pre-generative AI days. Among those surveyed in 2025, 12% are self-assessed data “high achievers” compared with 13% in 2021. Shortages of skilled talent remain a constraint, but teams also struggle with accessing fresh data, tracing lineage, and dealing with security complexity—important requirements for AI success.

Partly as a result, AI is not yet firing on all cylinders. There are even fewer “high achievers” when it comes to AI. Just 2% of respondents rate their organizations’ AI performance highly today in terms of delivering measurable business results. In fact, most are still struggling to scale generative AI. While two-thirds have deployed it, only 7% have done so widely.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

An AI adoption riddle

A few weeks ago, I set out on what I thought would be a straightforward reporting journey. 

After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised. 

But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 

There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.

But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 

“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it. 

The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.

Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.” 

Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise. 

So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

“We will never build a sex robot,” says Mustafa Suleyman

Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior. In August, he published a much-discussed post on his personal blog that urged his peers to stop trying to make what he called “seemingly conscious artificial intelligence,” or SCAI.

On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot, designed to boost its appeal in a crowded market in which customers can pick and choose between a pantheon of rival bots that already includes ChatGPT, Perplexity, Gemini, Claude, DeepSeek, and more.

I talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be.

One key Copilot update is a group-chat feature that lets multiple people talk to the chatbot at the same time. A big part of the idea seems to be to stop people from falling down a rabbit hole in a one-on-one conversation with a yes-man bot. Another feature, called Real Talk, lets people tailor how much Copilot pushes back on you, dialing down the sycophancy so that the chatbot challenges what you say more often.

Copilot also got a memory upgrade, so that it can now remember your upcoming events or long-term goals and bring up things that you told it in past conversations. And then there’s Mico, an animated yellow blob—a kind of Chatbot Clippy—that Microsoft hopes will make Copilot more accessible and engaging for new and younger users.  

Microsoft says the updates were designed to make Copilot more expressive, engaging, and helpful. But I’m curious how far those features can be pushed without starting down the SCAI path that Suleyman has warned about.  

Suleyman’s concerns about SCAI come at a time when we are starting to hear more and more stories about people being led astray by chatbots that are too engaging, too expressive, too helpful. OpenAI is being sued by the parents of a teenager who they allege was talked into killing himself by ChatGPT. There’s even a growing scene that celebrates romantic relationships with chatbots.

With all that in mind, I wanted to dig a bit deeper into Suleyman’s views. Because a couple of years ago he gave a TED Talk in which he told us that the best way to think about AI is as a new kind of digital species. Doesn’t that kind of hype feed the misperceptions Suleyman is now concerned about?  

In our conversation, Suleyman told me what he was trying to get across in that TED Talk, why he really believes SCAI is a problem, and why Microsoft would never build sex robots (his words). He had a lot of answers, but he left me with more questions.

Our conversation has been edited for length and clarity.

In an ideal world, what kind of chatbot do you want to build? You’ve just launched a bunch of updates to Copilot. How do you get the balance right when you’re building a chatbot that has to compete in a market in which people seem to value humanlike interaction, but you also say you want to avoid seemingly conscious AI?

It’s a good question. With group chat, this will be the first time that a large group of people will be able to speak to an AI at the same time. It really is a way of emphasizing that AIs shouldn’t be drawing you out of the real world. They should be helping you to connect, to bring in your family, your friends, to have community groups, and so on.

That is going to become a very significant differentiator over the next few years. My vision of AI has always been one where an AI is on your team, in your corner.

This is a very simple, obvious statement, but it isn’t about exceeding and replacing humanity—it’s about serving us. That should be the test of technology at every step. Does it actually, you know, deliver on the quest of civilization, which is to make us smarter and happier and more productive and healthier and stuff like that?

So we’re just trying to build features that constantly remind us to ask that question, and remind our users to push us on that issue.

Last time we spoke, you told me that you weren’t interested in making a chatbot that would role-play personalities. That’s not true of the wider industry. Elon Musk’s Grok is selling that kind of flirty experience. OpenAI has said it’s interested in exploring new adult interactions with ChatGPT. There’s a market for that. And yet this is something you’ll just stay clear of?

Yeah, we will never build sex robots. Sad in a way that we have to be so clear about that, but that’s just not our mission as a company. The joy of being at Microsoft is that for 50 years, the company has built, you know, software to empower people, to put people first.

Sometimes, as a result, that means the company moves slower than other startups and is more deliberate and more careful. But I think that’s a feature, not a bug, in this age, when being attentive to potential side effects and longer-term consequences is really important.

And that means what, exactly?

We’re very clear on, you know, trying to create an AI that fosters a meaningful relationship. It’s not that it’s trying to be cold and anodyne—it cares about being fluid and lucid and kind. It definitely has some emotional intelligence.

So where does it—where do you—draw those boundaries?

Our newest chat model, which is called Real Talk, is a little bit more sassy. It’s a bit more cheeky, it’s a bit more fun, it’s quite philosophical. It’ll happily talk about the big-picture questions, the meaning of life, and so on. But if you try and flirt with it, it’ll push back and it’ll be very clear—not in a judgmental way, but just, like: “Look, that’s not for me.”

There are other places where you can go to get that kind of experience, right? And I think that’s just a decision we’ve made as a company.

Is a no-flirting policy enough? Because if the idea is to stop people even imagining an entity, a consciousness, behind the interactions, you could still get that with a chatbot that wanted to keep things SFW. You know, I can imagine some people seeing something that’s not there even with a personality that’s saying, hey, let’s keep this professional.

Here’s a metaphor to try to make sense of it. We hold each other accountable in the workplace. There’s an entire architecture of boundary management, which essentially sculpts human behavior to fit a mold that’s functional and not irritating.

The same is true in our personal lives. The way that you interact with your third cousin is very different to the way you interact with your sibling. There’s a lot to learn from how we manage boundaries in real human interactions.

It doesn’t have to be either a complete open book of emotional sensuality or availability—drawing people into a spiraled rabbit hole of intensity—or, like, a cold dry thing. There’s a huge spectrum in between, and the craft that we’re learning as an industry and as a species is to sculpt these attributes.

And those attributes obviously reflect the values of the companies that design them. And I think that’s where Microsoft has a lot of strengths, because our values are pretty clear, and that’s what we’re standing behind.

A lot of people seem to like personalities. Some of the backlash to GPT-5, for example, was because the previous model’s personality had been taken away. Was it a mistake for OpenAI to have put a strong personality there in the first place, to give people something that they then missed?

No, personality is great. My point is that we’re trying to sculpt personality attributes in a more fine-grained way, right?

Like I said, Real Talk is a cool personality. It’s quite different to normal Copilot. We are also experimenting with Mico, which is this visual character, that, you know, people—some people—really love. It’s much more engaging. It’s easier to talk to about all kinds of emotional questions and stuff.

I guess this is what I’m trying to get straight. Features like Mico are meant to make Copilot more engaging and nicer to use, but it seems to go against the idea of doing whatever you can to stop people thinking there’s something there that you are actually having a friendship with.

Yeah. I mean, it doesn’t stop you necessarily. People want to talk to somebody, or something, that they like. And we know that if your teacher is nice to you at school, you’re going to be more engaged. The same with your manager, the same with your loved ones. And so emotional intelligence has always been a critical part of the puzzle, so it’s not to say that we don’t want to pursue it.

It’s just that the craft is in trying to find that boundary. And there are some things which we’re saying are just off the table, and there are other things which we’re going to be more experimental with. Like, certain people have complained that they don’t get enough pushback from Copilot—they want it to be more challenging. Other people aren’t looking for that kind of experience—they want it to be a basic information provider. The task for us is just learning to disentangle what type of experience to give to different people.

I know you’ve been thinking about how people engage with AI for some time. Was there an inciting incident that made you want to start this conversation in the industry about seemingly conscious AI?

I could see that there was a group of people emerging in the academic literature who were taking the question of moral consideration for artificial entities very seriously. And I think it’s very clear that if we start to do that, it would detract from the urgent need to protect the rights of many humans that already exist, let alone animals.

If you grant AI rights, that implies—you know—fundamental autonomy, and it implies that it might have free will to make its own decisions about things. So I’m really trying to frame a counter to that, which is that it won’t ever have free will. It won’t ever have complete autonomy like another human being.

AI will be able to take actions on our behalf. But these models are working for us. You wouldn’t want a pack of, you know, wolves wandering around that weren’t tame and that had complete freedom to go and compete with us for resources and weren’t accountable to humans. I mean, most people would think that was a bad idea and that you would want to go and kill the wolves.

Okay. So the idea is to stop some movement that’s calling for AI welfare or rights before it even gets going, by making sure that we don’t build AI that appears to be conscious? What about not building that kind of AI because certain vulnerable people may be tricked by it in a way that may be harmful? I mean, those seem to be two different concerns.

I think the test is going to be in the kinds of features the different labs put out and in the types of personalities that they create. Then we’ll be able to see how that’s affecting human behavior.

But is it a concern of yours that we are building a technology that might trick people into seeing something that isn’t there? I mean, people have claimed they’ve seen sentience inside far less sophisticated models than we have now. Or is that just something that some people will always do?

It’s possible. But my point is that a responsible developer has to do our best to try and detect these patterns emerging in people as quickly as possible and not take it for granted that people are going to be able to disentangle those kinds of experiences themselves.

When I read your post about seemingly conscious AI, I was struck by a line that says: “We must build AI for people; not to be a digital person.” It made me think of a TED Talk you gave last year where you say that the best way to think about AI is as a new kind of digital species. Can you help me understand why talking about this technology as a digital species isn’t a step down the path of thinking about AI models as digital persons or conscious entities?

I think the difference is that I’m trying to offer metaphors that make it easier for people to understand where things might be headed, and therefore how to avert that and how to control it.

Okay.

It’s not to say that we should do those things. It’s just pointing out that this is the emergence of a technology which is unique in human history. And if you just assume that it’s a tool or just a chatbot or a dumb— you know, I kind of wrote that TED Talk in the context of a lot of skepticism. And I think it’s important to be clear-eyed about what’s coming so that one can think about the right guardrails.

And yet, if you’re telling me this technology is a new digital species, I have some sympathy for the people who say, well, then we need to consider welfare.

I wouldn’t. [He starts laughing.] Just not in the slightest. No way. It’s not a direction that any of us want to go in.

No, that’s not what I meant. I don’t think chatbots should have welfare. I’m saying I’d have some sympathy for where such people were coming from when they hear, you know, Mustafa Suleyman tell them that this thing he’s building was a new digital species. I’d understand why they might then say that they wanted to stand up for it. I’m saying the words we use matter, I guess.

The rest of the TED Talk was all about how to contain AI and how not to let this species take over, right? That was the whole point of setting it up as, like, this is what’s coming. I mean, that’s what my whole book [The Coming Wave, published in 2023] was about—containment and alignment and stuff like that. There’s no point in pretending that it’s something that it’s not and then building guardrails and boundaries that don’t apply because you think it’s just a tool.

Honestly, it does have the potential to recursively self-improve. It does have the potential to set its own goals. Those are quite profound things. No other technology we’ve ever invented has that. And so, yeah, I think that it is accurate to say that it’s like a digital species, a new digital species. That’s what we’re trying to restrict to make sure it’s always in service of people. That’s the target for containment.

Finding return on AI investments across industries

The market is now three years past the debut of ChatGPT, and many pundit bylines have shifted to terms like “bubble” to explain why generative AI has not realized material returns outside a handful of technology suppliers.

In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear and measurable ROI. McKinsey had earlier published similar findings, suggesting that agentic AI would be the way forward to achieve major operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended that CIOs stop worrying about AI’s return on investment, because measuring gains is difficult and, if they were to try, the measurements would be wrong.

This places technology leaders in a precarious position: robust tech stacks already sustain their business operations, so what is the upside to introducing new technology?

For decades, deployment strategies have followed a consistent cadence in which tech operators avoid destabilizing business-critical workflows just to swap out individual components of their tech stacks. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk.

The price might increase when a new buyer takes over mature middleware, but the cost of losing part of your enterprise data midway through a transition to a new technology is far more severe than the cost of paying a higher price for a stable technology that you have run your business on for 20 years.

So, how do enterprises get a return on investing in the latest tech transformation?

First principle of AI: Your data is your value

Most articles about AI data concern the engineering tasks that ensure an AI model infers against business data in repositories that represent past and present business realities.

However, one of the most widely deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments. This step narrows the model’s scope to the content of the uploaded files, speeding up accurate responses and reducing the number of prompts required to get the best answer.
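To make the pattern concrete, here is a minimal sketch of that file-grounding step in Python. The `call_model` function and the file names are illustrative assumptions, not any particular vendor’s SDK:

```python
# A minimal sketch of grounding a prompt in uploaded files.
# `call_model` is a placeholder for whatever chat-completion client
# you use; it is an assumption, not a specific vendor API.
from pathlib import Path

def build_grounded_prompt(question: str, files: list[Path]) -> str:
    """Inline the files' contents so the model answers only from them."""
    context = "\n\n".join(
        f"--- {f.name} ---\n{f.read_text(encoding='utf-8')}" for f in files
    )
    return (
        "Answer using ONLY the documents below. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# Hypothetical usage:
# answer = call_model(build_grounded_prompt(
#     "Which invoices exceeded policy limits?", [Path("q3_expense_report.txt")]))
```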

This tactic relies upon sending your proprietary business data into an AI model, so there are two important considerations to address in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without access to non-public data, like your business’s data.

Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet. 

Most enterprises would automatically prioritize confidentiality of their data and design business workflows to maintain trade secrets. From an economic value point of view, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchase/onboarding as a typical supplier/procurement exercise, think through the potential to realize mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.

Second principle of AI: Boring by design

According to Information is Beautiful, in 2024 alone, 182 new generative AI models were introduced to the market. When GPT-5 came to market in 2025, many of the models from the prior 12 to 24 months were rendered unavailable until subscription customers threatened to cancel: their previously stable AI workflows were built on models that no longer worked. Their tech providers assumed customers would be excited about the newest models and did not appreciate the premium that business workflows place on stability. Video gamers, by contrast, are happy to upgrade their custom builds throughout the entire lifespan of their gaming rigs’ components, and will upgrade the entire system just to play a newly released title.

That behavior, however, does not translate to run-rate business operations. While many employees may use the latest models for document processing or content generation, back-office operations can’t sustain swapping out a tech stack three times a week to keep up with the latest model drops. Back-office work is boring by design.

The most successful AI deployments have focused on business problems unique to the organization, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense auditors of manually cross-checking individual reports, while keeping the final decision in a human’s responsibility zone, combines the best of both.

The important point is that none of these tasks requires constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from direct model APIs can offer additional long-term stability while preserving options to update or upgrade the underlying engines at the pace of your business.
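One way to read that advice in code: hide the model behind an interface your workflows own, so swapping engines never touches business logic. A minimal Python sketch, with vendor adapters stubbed out as assumptions:

```python
# Sketch of insulating business workflows from any one model API.
# Vendor names and the stubbed responses are illustrative assumptions.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    """Adapter around one vendor's SDK; only this layer knows its API."""
    def complete(self, prompt: str) -> str:
        return "[vendor A response]"  # in practice, call the vendor SDK here

class VendorBModel:
    """A second adapter; upgrading or swapping engines touches only this."""
    def complete(self, prompt: str) -> str:
        return "[vendor B response]"  # in practice, call the vendor SDK here

def audit_expense_report(model: TextModel, report: str) -> str:
    # Business logic depends only on the interface, so the underlying
    # engine can change at the pace of the business, not the model drops.
    return model.complete(f"Flag potential policy violations in:\n{report}")

print(audit_expense_report(VendorAModel(), "Taxi $840, client dinner $95"))
```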

Third principle of AI: Mini-van economics

The best way to avoid upside-down economics is to design systems that align to users’ needs rather than to vendor specs and benchmarks.

Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on supplier-led benchmarks, rather than starting their AI journey from what their business can consume, at what pace, on the capabilities they have deployed today.

While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack ample trunk space for groceries. Keep in mind that every remote server and model a user touches layers on cost, so design for frugality by reconfiguring workflows to minimize spending on third-party services.

Too many companies have found that their customer support AI workflows add millions of dollars in operational run-rate costs, then add still more development time and cost to rework the implementation for OpEx predictability. Meanwhile, companies that decided a system running at the pace a human can read—less than 50 tokens per second—was sufficient were able to deploy scaled-out AI applications with minimal additional overhead.
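Pacing output to reading speed is straightforward on the serving side; the saving comes from provisioning capacity for this pace rather than for benchmark throughput. A hedged sketch, assuming a `token_stream` iterable from whatever model client you use:

```python
# Throttle streamed tokens to roughly human reading speed (~50 tokens/s,
# per the figure above) so serving capacity can be sized to what users
# actually consume. `token_stream` is an assumed iterable of tokens.
import time
from typing import Iterable, Iterator

def throttle_tokens(token_stream: Iterable[str],
                    max_tokens_per_sec: float = 50.0) -> Iterator[str]:
    min_interval = 1.0 / max_tokens_per_sec
    last_emit = 0.0
    for token in token_stream:
        wait = min_interval - (time.monotonic() - last_emit)
        if wait > 0:
            time.sleep(wait)
        last_emit = time.monotonic()
        yield token

# Demo with a canned stream; in practice the tokens come from a model client.
for tok in throttle_tokens("Your refund was issued on Tuesday .".split()):
    print(tok, end=" ", flush=True)
```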

There are many aspects of this new automation technology to unpack. The best guidance is to start practical, to design for independence in underlying technology components so stable applications aren’t disrupted long term, and to leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers’ goals.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

Redefining data engineering in the age of AI

As organizations weave AI into more of their operations, senior executives are realizing data engineers hold a central role in bringing these initiatives to life. After all, AI only delivers when you have large amounts of reliable and well-managed, high-quality data. Indeed, this report finds that data engineers play a pivotal role in their organizations as enablers of AI. And in so doing, they are integral to the overall success of the business.

According to the results of a survey of 400 senior data and technology executives, conducted by MIT Technology Review Insights, data engineers have become influential in areas that extend well beyond their traditional remit as pipeline managers. AI is also changing how data engineers work, with the balance of their time shifting from core data management tasks toward AI-specific activities.

As their influence grows, so do the challenges data engineers face. A major one is dealing with greater complexity, as more advanced AI models elevate the importance of managing unstructured data and real-time pipelines. Another challenge is managing expanding workloads; data engineers are being asked to do more today than ever before, and that’s not likely to change.

Key findings from the report include the following:

  • Data engineers are integral to the business. This is the view of 72% of the surveyed technology leaders—and 86% of those in the survey’s biggest organizations, where AI maturity is greatest. It is a view held especially strongly among executives in financial services and manufacturing companies.
  • AI is changing everything data engineers do. The share of time data engineers spend each day on AI projects has nearly doubled in the past two years, from an average of 19% in 2023 to 37% in 2025, according to our survey. Respondents expect this figure to continue rising to an average of 61% in two years’ time. This is also contributing to bigger data engineer workloads; most respondents (77%) see these growing increasingly heavy.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Dispatch: Partying at one of Africa’s largest AI gatherings

It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. The room is draped in white curtains, and a giant screen blinks with videos created with generative AI. A classic East African folk song by the Tanzanian singer Saida Karoli plays loudly on the speakers.

Friends greet each other as waiters serve arrowroot crisps and sugary mocktails. A man and a woman wearing leopard skins atop their clothes sip beer and chat; many women are in handwoven Ethiopian garb with red, yellow, and green embroidery. The crowd teems with life. “The best thing about the Indaba is always the parties,” computer scientist Nyalleng Moorosi tells me. Indaba means “gathering” in Zulu, and Deep Learning Indaba, where we’re meeting, is an annual AI conference where Africans present their research and technologies they’ve built.

Moorosi is a senior researcher at the Distributed AI Research Institute and has dropped in for the occasion from the mountain kingdom of Lesotho. Dressed in her signature “Mama Africa” headwrap, she makes her way through the crowded hall.

Moments later, a cheerful set of Nigerian music begins to play over the speakers. Spontaneously, people pop up and gather around the stage, waving flags of many African nations. Moorosi laughs as she watches. “The vibe at the Indaba—the community spirit—is really strong,” she says, clapping.

Moorosi is one of the founding members of the Deep Learning Indaba, which began in 2017 from a nucleus of 300 people gathered in Johannesburg, South Africa. Since then, the event has expanded into a prestigious pan-African movement with local chapters in 50 countries.

This year, nearly 3,000 people applied to join the Indaba; about 1,300 were accepted. They hail primarily from English-speaking African countries, but this year I noticed a new influx from Chad, Cameroon, the Democratic Republic of Congo, South Sudan, and Sudan. 

Moorosi tells me that the main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. Indeed, the organizations I’ve seen at the event include Microsoft Research’s AI for Good Lab, Google, the Mastercard Foundation, and the Mila–Quebec AI Institute. But she hopes to see more homegrown ventures create opportunities within Africa.

That evening, before the dinner, we’d both attended a panel on AI policy in Africa. Experts discussed AI governance and called for those developing national AI strategies to seek more community engagement. People raised their hands to ask how young Africans could access high-level discussions on AI policy, and whether Africa’s continental AI strategy was being shaped by outsiders. Later, in conversation, Moorosi told me she’d like to see more African priorities (such as African Union–backed labor protections, mineral rights, or safeguards against exploitation) reflected in such strategies. 

On the last day of the Indaba, I ask Moorosi about her dreams for the future of AI in Africa. “I dream of African industries adopting African-built AI products,” she says, after a long moment. “We really need to show our work to the world.” 

Abdullahi Tsanni is a science writer based in Senegal who specializes in narrative features. 

From slop to Sotheby’s? AI art enters a new phase

In this era of AI slop, the idea that generative AI tools like Midjourney and Runway could be used to make art can seem absurd: What possible artistic value is there to be found in the likes of Shrimp Jesus and Ballerina Cappuccina? But amid all the muck, there are people using AI tools with real consideration and intent. Some of them are finding notable success as AI artists: They are gaining huge online followings, selling their work at auction, and even having it exhibited in galleries and museums. 

“Sometimes you need a camera, sometimes AI, and sometimes paint or pencil or any other medium,” says Jacob Adler, a musician and composer who won the top prize at the generative video company Runway’s third annual AI Film Festival for his work Total Pixel Space. “It’s just one tool that is added to the creator’s toolbox.” 

One of the most conspicuous features of generative AI tools is their accessibility. With no training and in very little time, you can create an image of whatever you can imagine in whatever style you desire. That’s a key reason AI art has attracted so much criticism: It’s now trivially easy to clog sites like Instagram and TikTok with vapid nonsense, and companies can generate images and video themselves instead of hiring trained artists.

Henry Daubrez created these visuals for a bitcoin NFT titled The Order of Satoshi, which sold at Sotheby’s for $24,000.
COURTESY OF THE ARTIST

Henry Daubrez, an artist and designer who created the AI-generated visuals for a bitcoin NFT that sold for $24,000 at Sotheby’s and is now Google’s first filmmaker in residence, sees that accessibility as one of generative AI’s most positive attributes. People who had long since given up on creative expression, or who simply never had the time to master a medium, are now creating and sharing art, he says. 

But that doesn’t mean the first AI-generated masterpiece could come from just anyone. “I don’t think [generative AI] is going to create an entire generation of geniuses,” says Daubrez, who has described himself as an “AI-assisted artist.” Prompting tools like DALL-E and Midjourney might not require technical finesse, but getting those tools to create something interesting, and then evaluating whether the results are any good, takes both imagination and artistic sensibility, he says: “I think we’re getting into a new generation which is going to be driven by taste.” 

Kira Xonorika’s Trickster is the first piece to use generative AI in the Denver Art Museum’s permanent collection.
COURTESY OF THE ARTIST

Even for artists who do have experience with other media, AI can be more than just a shortcut. Beth Frey, a trained fine artist who shares her AI art on an Instagram account with over 100,000 followers, was drawn to early generative AI tools because of the uncanniness of their creations—she relished the deformed hands and haunting depictions of eating. Over time, the models’ errors have been ironed out, which is part of the reason she hasn’t posted an AI-generated piece on Instagram in over a year. “The better it gets, the less interesting it is for me,” she says. “You have to work harder to get the glitch now.”

Beth Frey’s Instagram account @sentientmuppetfactory features uncanny AI creations.
COURTESY OF THE ARTIST

Making art with AI can require relinquishing control—to the companies that update the tools, and to the tools themselves. For Kira Xonorika, a self-described “AI-collaborative artist” whose short film Trickster is the first generative AI piece in the Denver Art Museum’s permanent collection, that lack of control is part of the appeal. “[What] I really like about AI is the element of unpredictability,” says Xonorika, whose work explores themes such as indigeneity and nonhuman intelligence. “If you’re open to that, it really enhances and expands ideas that you might have.”

But the idea of AI as a co-creator—or even simply as an artistic medium—is still a long way from widespread acceptance. To many people, “AI art” and “AI slop” remain synonymous. And so, as grateful as Daubrez is for the recognition he has received so far, he’s found that pioneering a new form of art in the face of such strong opposition is an emotional mixed bag. “As long as it’s not really accepted that AI is just a tool like any other tool and people will do whatever they want with it—and some of it might be great, some might not be—it’s still going to be sweet [and] sour,” he says.

Future-proofing business capabilities with AI technologies

Artificial intelligence has always promised speed, efficiency, and new ways of solving problems. But what’s changed in the past few years is how quickly those promises are becoming reality. From oil and gas to retail, logistics to law, AI is no longer confined to pilot projects or speculative labs. It is being deployed in critical workflows, reducing processes that once took hours to just minutes, and freeing up employees to focus on higher-value work.

“Business process automation has been around a long while. What GenAI and AI agents are allowing us to do is really give superpowers, so to speak, to business process automation,” says Manasi Vartak, chief AI architect at Cloudera.

Much of the momentum is being driven by two related forces: the rise of AI agents and the rapid democratization of AI tools. AI agents, whether designed for automation or assistance, are proving especially powerful at speeding up response times and removing friction from complex workflows. Instead of waiting on humans to interpret a claim form, read a contract, or process a delivery driver’s query, AI agents can now do it in seconds, and at scale. 

At the same time, advances in usability are putting AI into the hands of nontechnical staff, making it easier for employees across various functions to experiment with, adopt, and adapt these tools for their own needs.

That doesn’t mean the road is without obstacles. Concerns about privacy, security, and the accuracy of LLMs remain pressing. Enterprises are also grappling with the realities of cost management, data quality, and how to build AI systems that are sustainable over the long term. And as companies explore what comes next—including autonomous agents, domain-specific models, and even steps toward artificial general intelligence—questions about trust, governance, and responsible deployment loom large.

“Your leadership is especially critical in making sure that your business has an AI strategy that addresses both the opportunity and the risk while giving the workforce some ability to upskill such that there’s a path to become fluent with these AI tools,” says Eddie Kim, principal advisor of AI and modern data strategy at Amazon Web Services.

Still, the case studies are compelling. A global energy company cutting threat detection times from over an hour to just seven minutes. A Fortune 100 legal team saving millions by automating contract reviews. A humanitarian aid group harnessing AI to respond faster to crises. Long gone are the days of incremental steps forward. These examples illustrate that when data, infrastructure, and AI expertise come together, the impact is transformative. 

The future of enterprise AI will be defined by how effectively organizations can marry innovation with scale, security, and strategy. That’s where the real race is happening.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Can we repair the internet?

From addictive algorithms to exploitative apps, data mining to misinformation, the internet today can be a hazardous place. Books by three influential figures—the intellect behind “net neutrality,” a former Meta executive, and the web’s own inventor—propose radical approaches to fixing it. But are these luminaries the right people for the job? Though each shows conviction, and even sometimes inventiveness, the solutions they present reveal blind spots.

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity
Tim Wu
KNOPF, 2025

In The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity, Tim Wu argues that a few platform companies have too much concentrated power and must be dismantled. Wu, a prominent Columbia professor who popularized the principle that a free internet requires all online traffic to be treated equally, believes that existing legal mechanisms, especially anti-monopoly laws, offer the best way to achieve this goal.

Pairing economic theory with recent digital history, Wu shows how platforms have shifted from giving to users to extracting from them. He argues that our failure to understand their power has only encouraged them to grow, displacing competitors along the way. And he contends that convenience is what platforms most often exploit to keep users entrapped. “The human desire to avoid unnecessary pain and inconvenience,” he writes, may be “the strongest force out there.”

He cites Google’s and Apple’s “ecosystems” as examples, showing how users can become dependent on such services as a result of their all-encompassing seamlessness. To Wu, this isn’t a bad thing in itself. The ease of using Amazon to stream entertainment, make online purchases, or help organize day-to-day life delivers obvious gains. But when powerhouse companies like Amazon, Apple, and Alphabet win the battle of convenience with so many users—and never let competitors get a foothold—the result is “industry dominance” that must now be reexamined.

The measures Wu advocates—and that appear the most practical, as they draw on existing legal frameworks and economic policies—are federal anti-monopoly laws, utility caps that limit how much companies can charge consumers for service, and “line of business” restrictions that prohibit companies from operating in certain industries.

Anti-monopoly provisions and antitrust laws are effective weapons in our armory, Wu contends, pointing out that they have been successfully used against technology companies in the past. He cites two well-known cases. The first is the 1960s antitrust case brought by the US government against IBM, which helped create competition in the computer software market that enabled companies like Apple and Microsoft to emerge. The 1982 AT&T case that broke the telephone conglomerate up into several smaller companies is another instance. In each, the public benefited from the decoupling of hardware, software, and other services, leading to more competition and choice in a technology market.

But will past performance predict future results? It’s not yet clear whether these laws can be successful in the platform age. The 2025 antitrust case against Google—in which a judge ruled that the company did not have to divest itself of its Chrome browser as the US Justice Department had proposed—reveals the limits of pursuing tech breakups through the law. The 2001 antitrust case brought against Microsoft likewise failed to separate the company from its web browser and mostly kept the conglomerate intact. Wu noticeably doesn’t discuss the Microsoft case when arguing for antitrust action today.

Nick Clegg, until recently Meta’s president of global affairs and a former deputy prime minister of the UK, takes a position very different from Wu’s: that trying to break up the biggest tech companies is misguided and would degrade the experience of internet users. In How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict, Clegg acknowledges Big Tech’s monopoly over the web. But he believes punitive legal measures like antitrust laws are unproductive and can be avoided by means of regulation, such as rules for what content social media can and can’t publish. (It’s worth noting that Meta is facing its own antitrust case, involving whether it should have been allowed to acquire Instagram and WhatsApp.)

How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict
Nick Clegg
BODLEY HEAD, 2025

Clegg also believes Silicon Valley should take the initiative to reform itself. He argues that encouraging social media networks to “open up the books” and share their decision-making power with users is more likely to restore some equilibrium than contemplating legal action as a first resort.

But some may be skeptical of a former Meta exec and politician who worked closely with Mark Zuckerberg and still wasn’t able to usher in such changes to social media sites while working for one. What will only compound this skepticism is the selective history found in Clegg’s book, which briefly acknowledges some scandals (like the one surrounding Cambridge Analytica’s data harvesting from Facebook users in 2016) but refuses to discuss other pertinent ones. For example, Clegg laments the “fractured” nature of the global internet today but fails to acknowledge Facebook’s own role in this splintering.

Breaking up Big Tech through antitrust laws would hinder innovation, says Clegg, arguing that the idea “completely ignores the benefits users gain from large network effects.” Users stick with these outsize channels because they can find “most of what they’re looking for,” he writes, like friends and content on social media and cheap consumer goods on Amazon and eBay.

Wu might concede this point, but he would disagree with Clegg’s claims that maintaining the status quo is beneficial to users. “The traditional logic of antitrust law doesn’t work,” Clegg insists. Instead, he believes less sweeping regulation can help make Big Tech less dangerous while ensuring a better user experience.

Clegg has seen both sides of the regulatory coin: He worked in David Cameron’s government passing national laws for technology companies to follow and then moved to Meta to help the company navigate those types of nation-specific obligations. He bemoans the hassle and complexity Silicon Valley faces in trying to comply with differing rules across the globe, some set by “American federal agencies” and others by “Indian nationalists.”

But with the resources such companies command, surely they are more than equipped to cope? Given that Meta itself has previously meddled in access to the internet (such as in India, whose telecommunications regulator ultimately blocked its Free Basics internet service for violating net neutrality rules), this complaint seems suspect coming from Clegg. What should be the real priority, he argues, is not any new nation-specific laws but a global “treaty that protects the free flow of data between signatory countries.”

Clegg believes that these nation-specific technology obligations—a recent one is Australia’s ban on social media for people under 16—usually reflect fallacies about the technology’s human impact, a subject that can be fraught with anxiety. Such laws have proved ineffective and tend to taint the public’s understanding of social networks, he says. There is some truth to his argument here, but reading a book in which a former Facebook executive dismisses techno-determinism—that is, the argument that technology makes people do or think certain things—may be cold comfort to those who have seen the harm technology can do.

In any case, Clegg’s defensiveness about social networks may not gain much favor from users themselves. He stresses the need for more personal responsibility, arguing that Meta doesn’t ever intend for users to stay on Facebook or Instagram endlessly: “How long you spend on the app in a single session is not nearly as important as getting you to come back over and over again.” Social media companies want to serve you content that is “meaningful to you,” he claims, not “simply to give you a momentary dopamine spike.” All this feels disingenuous at best.

What Clegg advocates—unsurprisingly—is not a breakup of Big Tech but a push for it to become “radically transparent,” whether on its own or, if necessary, with the help of federal legislators. He also wants platforms to bring users more into their governance processes (by using Facebook’s model of community forums to help improve their apps and products, for example). Finally, Clegg also wants Big Tech to give users more meaningful control of their data and how companies such as Meta can use it.

Here Clegg shares common ground with the inventor of the web, Tim Berners-Lee, whose own proposal for reform advances a technically specific vision for doing just that. In his memoir/manifesto This Is for Everyone: The Unfinished Story of the World Wide Web, Berners-Lee acknowledges that his initial vision—of a technology he hoped would remain open-source, collaborative, and completely decentralized—is a far cry from the web that we know today.

This Is for Everyone: The Unfinished Story of the World Wide Web
Tim Berners-Lee
FARRAR, STRAUS & GIROUX, 2025

If there’s any surviving manifestation of his original project, he says, it’s Wikipedia, which remains “probably the best single example of what I wanted the web to be.” His best idea for moving power from Silicon Valley platforms into the hands of users is to give them more data control. He pushes for a universal data “pod” he helped develop, known as “Solid” (an abbreviation of “social linked data”).

The system—which was originally developed at MIT—would offer a central site where people could manage data ranging from credit card information to health records to social media comment history. “Rather than have all this stuff siloed off with different providers across the web, you’d be able to store your entire digital information trail in a single private repository,” Berners-Lee writes.
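As a thought experiment, the pod concept can be sketched in a few lines of Python: one private store per person, with per-app grants gating access by data category. This is a toy illustration of the idea as described, not the actual Solid protocol or its APIs:

```python
# Toy sketch of a personal data "pod": one private store per person,
# with per-app, per-category access grants. Illustrative only; this is
# not the Solid protocol or any real implementation of it.
from dataclasses import dataclass, field

@dataclass
class DataPod:
    owner: str
    _records: dict[str, dict] = field(default_factory=dict)    # category -> data
    _grants: dict[str, set[str]] = field(default_factory=dict)  # app -> categories

    def store(self, category: str, data: dict) -> None:
        self._records[category] = data

    def grant(self, app: str, category: str) -> None:
        self._grants.setdefault(app, set()).add(category)

    def read(self, app: str, category: str) -> dict:
        # Apps see only the categories the owner has explicitly granted.
        if category not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {category!r}")
        return self._records[category]

pod = DataPod(owner="alice")
pod.store("health", {"heart_rate_avg": 62})
pod.grant("meal-planner", "health")
print(pod.read("meal-planner", "health"))   # allowed
# pod.read("ad-network", "health")          # would raise PermissionError
```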

The Solid product may look like a kind of silver bullet in an age when data harvesting is familiar and data breaches are rampant. Placing greater control with users and enabling them to see “what data [i]s being generated about them” does sound like a tantalizing prospect.

But some people may have concerns about, for example, merging their confidential health records with data from personal devices (like heart rate info from a smart watch). No matter how much user control and decentralization Berners-Lee may promise, recent data scandals (such as cases in which period-tracking apps misused clients’ data) may be on people’s minds.

Berners-Lee believes that centralizing user data in a product like Solid could save people time and improve daily life on the internet. “An alien coming to Earth would think it was very strange that I had to tell my phone the same things again and again,” he complains about the experience of using different airline apps today.

With Solid, everything from vaccination records to credit card transactions could be kept within the digital vault and plugged into different apps. Berners-Lee believes that AI could also help people make more use of this data—for example, by linking meal plans to grocery bills. Still, if he’s optimistic on how AI and Solid could coordinate to improve users’ lives, he is vague on how to make sure that chatbots manage such personal data sensitively and safely.

Berners-Lee generally opposes regulation of the web (except in the case of teenagers and social media algorithms, where he sees a genuine need). He believes in internet users’ individual right to control their own data; he is confident that a product like Solid could “course-correct” the web from its current “exploitative” and extractive direction.

Of the three writers’ approaches to reform, it is Wu’s that has shown some effectiveness of late. Companies like Google have been forced to give competitors some advantage through data sharing, and they have now seen limits on how their systems can be used in new products and technologies. But in the current US political climate, will antitrust laws continue to be enforced against Big Tech?

Clegg may get his way on one issue: limiting new nation-specific laws. President Donald Trump has confirmed that he will use tariffs to penalize countries that ratify their own national laws targeting US tech companies. And given the posture of the Trump administration, it doesn’t seem likely that Big Tech will see more regulation in the US. Indeed, social networks have seemed emboldened (Meta, for example, removed fact-checkers and relaxed content moderation rules after Trump’s election win). In any case, the US hasn’t passed a major piece of federal internet legislation since 1996.

If using anti-monopoly laws through the courts isn’t possible, Clegg’s push for a US-led omnibus deal—setting consensual rules for data and acceptable standards of human rights—may be the only way to make some more immediate improvements.

In the end, there is not likely to be any single fix for what ails the internet today. But the ideas the three writers agree on—greater user control, more data privacy, and increased accountability from Silicon Valley—are surely the outcomes we should all fight for.

Nathan Smith is a writer whose work has appeared in the Washington Post, the Economist, and the Los Angeles Times.

Transforming commercial pharma with agentic AI 

Amid the turbulence of the wider global economy in recent years, the pharmaceuticals industry is weathering its own storms. The rising cost of raw materials and supply chain disruptions are squeezing margins as pharma companies face intense pressure—including from countries like the US—to control drug costs. At the same time, a wave of expiring patents threatens around $300 billion in potential lost sales by 2030. As companies lose the exclusive right to sell the drugs they have developed, competitors can enter the market with lower-cost generic and biosimilar alternatives, leading to a sharp decline in branded drug sales—a “patent cliff.” Simultaneously, the cost of bringing new drugs to market is climbing: McKinsey estimates that the cost per launch has been growing 8% each year, reaching $4 billion in 2022.

In clinics and health-care facilities, norms and expectations are evolving, too. Patients and health-care providers are seeking more personalized services, leading to greater demand for precision drugs and targeted therapies. While these drugs are proving effective for patients, the complexity of formulating and producing them makes them expensive and restricts their sale to a smaller customer base.

The need for personalization extends to sales and marketing operations too, as pharma companies increasingly compete for the attention of health-care professionals (HCPs). Estimates suggest that biopharmas were able to reach 45% of HCPs in 2024, down from 60% in 2022. Personalization, real-time communication channels, and relevant content offer a way of building trust and reaching HCPs in an increasingly competitive market. But with ever-growing volumes of content requiring medical, legal, and regulatory (MLR) review, companies are struggling to keep up, leading to potential delays and missed opportunities.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.