A new paradigm for managing data

Regeneron Pharmaceuticals, a biotechnology company that develops life-transforming medicines, found itself inundated with vast volumes of data during the peak of the covid-19 pandemic. In order to derive actionable information from these disparate data sets, which ranged from clinical trial data to real-time supply chain information, the company needed new ways to join and relate them, regardless of what format they were in or where they came from.


Shah Nawaz, chief technology officer and vice president of digital technology and engineering at Regeneron, says, “At the time, everybody in the world was reporting on their covid-19 findings from different countries and in different languages.” The challenge was how to make sense of these massive data sets in a timely manner, assisting researchers and clinicians, and ultimately getting the best treatments to patients faster. After all, he says, “when you’re dealing with large-scale data sets in hundreds, if not thousands, of locations, connecting the dots can be a complex problem.”

Regeneron isn’t the only company eager to derive more value from its data. Despite the enormous amounts of data they collect and the amount of capital they invest in data management solutions, business leaders are still not benefiting from their data. According to IDC research, 83% of CEOs want their organizations to be more data driven, but they struggle with the cultural and technological changes needed to execute an effective data strategy.

In response, many organizations, including Regeneron, are turning to a new form of data architecture as a modern approach to data management. In fact, by 2024, more than three-quarters of current data lake users will be investing in this type of hybrid “data lakehouse” architecture to enhance the value generated from their accumulated data, according to Matt Aslett, a research director with Ventana Research.

“Data lakehouse” is the term for a modern, open data architecture that combines the performance and optimization of a data warehouse with the flexibility of a data lake. But achieving the speed, performance, agility, optimization, and governance promised by this technology also requires embracing best practices that prioritize corporate goals and support enterprise-wide collaboration.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Entering the software economy

Many companies looking to enter the software economy, the ecosystem of companies that create or are enabled by software, do so through acquisitions, often by targeting startups. Evaluating the potential value of these smaller companies, however, is a specialized skill, says Jeff Vogel, head of the Software Strategy Group for EY-Parthenon. For companies, discovering and accounting for hidden talent and technology risks is a big factor in a successful merger or acquisition.

The acceleration of digital transformation and the speed of customer demands are turning almost every business into a technology business. Creating, using, or selling technology is now a critical part of every enterprise. But how do companies add emerging technologies and innovations?

“They need to believe in the market, that there’s room to grow in that market or room to expand the market; believe in the company’s ability to execute; or believe that they’re coming with a transformation thesis that they’re going to fundamentally change what the company does and how it does it in order to recognize their return,” says Vogel.

Non-technical companies often look to acquisitions, particularly of startups that are touting emerging technology, to make business processes run more efficiently. They also see software investments offering opportunities for high growth and generally high gross margins. But it’s important to gauge the risk and the reward of acquiring software, Vogel says. Just as entering the software economy can yield high growth, the market moves fast, and value can be lost as easily as it is gained.

While there are always risks in business, Vogel says that one indicator of a strong acquisition is talent retention and culture. A lack of synergy between company cultures and poorly managed or deployed talent can pose barriers to a smooth acquisition and integration.

“Because software is an intangible IP and it’s very much tied to the people who build it and maintain it, if you have talent drains due to culture, compensation, or other things after an acquisition, that’s usually the leading indicator that the thesis is going to go up in smoke,” says Vogel.

This episode of Business Lab is produced in association with EY-Parthenon. Learn more about EY-Parthenon’s disruptive technology solutions at ey.com/us/disruptivetech.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is about acquiring emerging technologies. If it’s true that every company is becoming a technology company, then coming up with that technology can happen in many ways. Sometimes it’s homegrown, but other times acquiring technologies and startups is a frequent course of action, not just for the parent company but for funding innovation as well.

Two words for you: better building.

My guest is Jeff Vogel, head of the Software Strategy Group for EY-Parthenon.

This podcast is sponsored by EY-Parthenon.

Welcome, Jeff.

Jeff Vogel: Glad to be here.

Laurel: So, you’ve been working in private equity for years and have more than three decades of experience as an entrepreneur and an executive in software and technology. Can you paint a picture of the current software economy?

Jeff: Sure. So, I might start by defining what we mean by software economy. This term that we define really refers to software companies. So that’s pretty clear to people. Companies that sell or license software. That could be on-premises old-school software; could be more modern, SaaS-based (software-as-a-service) software. But then there’s a whole new slew of companies that people might think of as tech-enabled services. Services businesses that aren’t selling or licensing software. That are selling you some business or consumer service, but powering it with software.

So people obviously know of companies in marketing technology and online search, in recruiting, in transportation, that enable their services with software, but they’re not selling software. So those companies. Particularly if those companies differentiate on the software, so they’re not just using third-party, off-the-shelf software to deliver their service. But they have hundreds of software engineers, dozens or more patents, tens of millions of lines of code, because they’re developing proprietary software that powers their business service. And it’s actually how they differentiate even though they’re not licensing software.

So, this collection of companies that are either selling software or selling business services that are enabled by software, that are attempting to differentiate on that software, is what we call the software economy. And these software economy companies share in common those things I mentioned. Lots of software engineers, lots of code, often intellectual property (IP), patents, and trade secrets behind that code. And attempting to differentiate by the way that technology manifests itself to customers or enables a business service to be different, more efficient, faster, better, sometimes cheaper.

Laurel: That’s very helpful to get a view of the entire ecosystem there. So at EY-Parthenon, you help private equity companies with technology acquisitions. As an industry, how does private equity work versus say a basic acquisition of one company that acquires another?

Jeff: Yeah, sure. Good question there. So, when you think about a traditional acquisition, let’s say one tech company buying another, there could be three or four families of theses driving that acquisition. It could be vertical integration. Buy one of our suppliers and integrate it and take a middleman out. It could be cross-sell and TAM (total addressable market) expansion. We want to buy something that’s adjacent to where we are and we’ll achieve synergy because we can cross-sell it through our customers, through our channels, through our salespeople, and vice versa. It could be new market entry. We want to enter a market and it’s going to take us too long and cost us too much money to do that organically. So we want to acquire into it. Or it could be some form of transformation. We’re trying to transform our company over a period of years from one type of business to another.

And those are all types of acquisitions that are done. You can usually put them into those three or four buckets. And then it follows that there’s often some synergy because of those theses. It could be revenue synergy by cross-sell. As a revenue synergy, we’re going to get more revenue than the two companies combined because of the ability to cross-sell. It could be a cost synergy: could be that we have redundant products and we don’t need both of them. We can make all the customers just as happy by eliminating the redundant capabilities and presumably having more efficient product development, product marketing, go-to market organizations.

And even when you don’t see those obvious synergies, typically if you’re doing anything at scale, there’s always back-office synergy. I don’t need two HR organizations, I don’t need two finance organizations, I don’t need two marketing communications organizations. There’s usually some synergy there. So, tech-on-tech or company-on-company, you can often think through with that lens.

Now, private equity, of course, where it’s just a financial buyer, a private equity firm buying a company, none of those theses that require two companies exist, at least in the initial acquisition. So when the private equity company first buys a tech company, they’re going to have a thesis that’s based on belief in the product and the company and their ability to achieve a return on investment on that. And private equity firms are, to be upper quartile, they’re looking for a 20% net IRR [net internal rate of return] over some period of time in order to do that. So, there’s a significant hurdle rate. They’re paying top dollar for these companies, yet they have to achieve top return.

So, they need to believe in the market, that there’s room to grow in that market or room to expand the market; believe in the company’s ability to execute; or believe that they’re coming with a transformation thesis that they’re going to fundamentally change what the company does and how it does it in order to recognize their return. So, there’s a pretty high bar and some of those synergies that M&A has are not available, at least in the first acquisition. Now, it’s pretty common that after that first acquisition, a private equity firm might develop a thesis that’s about acquiring more companies. And those subsequent acquisitions, some people call that a platform build or tuck-ins, might have a thesis that’s more in line with what the tech-on-tech examples illustrate.

Laurel: So interestingly, once one technology company is bought, then a portfolio could possibly be assumed, et cetera, and it paves the way for more investment.

Jeff: So since you mentioned that, often that’s becoming more prevalent today because the private equity firms are paying up and they’re just, buy the company, believe in the market. And the company thesis sometimes isn’t enough to get their return. They need to add scale and dollar-average down. In other words, if I’m paying 20 times EBITDA [earnings before interest, taxes, depreciation, and amortization] and seven times revenue for the first deal, and I know that it’s going to be hard to make my return on that, I may need to go find some tuck-ins and some other deals where I can start recognizing some of the synergies that strategics have available to them and bring that multiple down. That first deal might be done at those multiples. Maybe subsequent deals are done at 60% of those multiples and your average multiple winds up being somewhere in between. And that’s pretty common these days, particularly as firms are paying up for the initial platform.

Laurel: So what are some of those differences between evaluating mature companies and startups? Because that’s got to be some kind of specialized skill.

Jeff: Yeah. It’s interesting. So sometimes—there’s a little joke in the industry that the earlier stage you are, the easier it is to raise money. And one reason is there’s less to diligence. So that’s why diligence in the venture capital world looks very different than diligence in the private equity world. There’s actually less to diligence. There’s a little more of term sheets, quote “on the back of a napkin.” A lot of venture capital is relationship based; it’s believing in the team. Because one thing they teach you at venture capital school is the business plan that you invest in won’t be the one that a company is ultimately successful in. So, you’re really betting on the team, you’re betting on the team’s ability to pivot and navigate and find the eventual path. Because that first one for early-stage companies is probably not where they’re going to wind up being successful.

So, there’s not a lot to diligence and there is market risk and there is product risk. Those are two big risks that you take in early-stage investing. You move over into later stage and mature companies, market is probably defined, the competitive set is probably defined, and there’s probably a product that’s doing something because these companies have substantive revenue. Now there might be a next generation of the product, the product might be under competitive threat.

The product might need to be transformed. It might have what we call technical debt. It might have a re-architecture or a re-platforming. It might be an on-premises product that has to move to the cloud and become a SaaS [software-as-a-service] product. All of those are things that could be roadmap objectives of a company that you would want to diligence because they’re essentially expenses that you’re signing up for—things that the company has to do to maintain or improve its market position and its financial profile over time that you’re betting on. And you want to diligence those really well. We call this technical debt “off-balance-sheet liabilities.” It’s like deferred maintenance on a house.

So these mature products have lots of it. They’re not on the balance sheet so I can’t read the financial statement and say, “You owe that bank a million dollars.” But under the covers, in between the lines, there’s a body of technology and that technology needs tender loving care and maintenance just like a home or a building might. And in diligence you want to try to understand that. You want to try to understand the market needs. You want to try to understand the competitive set, and the competitive landscape, and the roadmap that the company has for navigating that. And see if that aligns with your management teams and your ability to execute.

So we like to say all these companies have risks and a lot of diligence is about aligning the risks that are there with those that you as a private equity firm are well positioned to undertake. In other words, some firms are willing to live with some financial risk or some product risk or some market risk or some talent risk. But other risks they’re like, “no, no, no, no, we don’t take market risk, but you have some talent risk, which we can help with because we’re great at recruiting and retaining talent.” So a lot of it isn’t that there’s no risks in the deal, but it’s understanding them, attempting to quantify them. And then culturally and DNA-wise, what are the types of risks that your firm is well-suited to taking on and aligns with the culture and DNA of the firm, versus what risks you just can’t touch?

And for some folks that are newer to tech—if you’ve been a private equity firm investing in industrials and now you’re coming into tech, there’s a lot of product and market risks because markets change quickly and products have to change quickly. And those might be risks that some of the newer firms investing in tech don’t take, as compared to some firms that have been around and getting used to software economies for the last 10 or 15 years and are better suited to understanding the disruption and opportunity that comes along with software investing.

Laurel: You’ve mentioned a little bit of this, of why non-technical companies would want to acquire emerging technology companies: integrating product portfolios, cross-selling, et cetera. But what is the potential value of these acquisitions in terms of that innovation, profit, and talent?

Jeff: We see a lot of these non-tech companies trying to become more software enabled and software driven and enter the software economy. And that could be for really good reasons. That software is good for their customers. It makes some business process easier, faster, cheaper, smoother, higher quality, more automated. But it could also be for financial ones. Software enjoys relatively low friction to grow and enter, opportunity for high growth. People see these crazy growth companies in the software economy all the time, and in other sectors of the economy it’s hard to grow at those rates. High gross margins in software. Cost of goods is a pretty small percentage of revenue and what you sell products for. We have a lot of rule of 40 or 50 or 60 companies. If you don’t know what that is, rule of 40 is when you add together the growth rate of a company with the EBITDA margin of the company. And 40 used to be great. If you’re growing at 20% and delivering 20% EBITDA margins, that’s pretty good.

But we’re actually seeing rule of 50 and 60 companies in software today. So combining those two [figures], those are typically trade-offs. I can grow faster if I invest more of my profits, if I’m a little less profitable, or I can grow slower and have more profit. But when I can do both in a reasonable percentage and I’m rule of 40, 50, or 60, that’s pretty strong. And we see a lot of those companies in software, high multiples. People love that because it means higher exits and lower cost of capital when they’re raising money. We don’t require a lot of working capital, we don’t have a lot of factories, we don’t have a lot of inventory. So managing the balance sheet is a lot easier.

So a lot of people are jealous of the metrics and low friction and the asset-light nature of the software economy and want to try to make their companies start looking like that. And that’s why we see a lot of these companies either transforming themselves or starting to acquire software companies and attempt to garner some of the benefits of being in the software economy.

Now, the other side of the coin, of course, is there’s always a little bit of “be careful what you wish for” because you come on over to the software economy, and what’s different over here? Well, you can go from zero to 100 pretty quickly. You can establish yourself. You could not be a company one year and be a major player and dominate the market six years later in multi-billion-dollar markets. And we’ve all seen that, particularly in Silicon Valley. But the other side of it is you can go from 100 to zero pretty darn quickly, and you’re seeing some of those companies play out today also.

So there is another side of the coin, and you have to have the stomach for it and you have to have the risk profile for it and you have to have the DNA for it and you have to have the talent for it. So it is somewhat different from running non-software economy businesses. Those are the reasons why we see these non-tech companies starting to acquire tech companies and enter the software economy.

Laurel: Just so everyone is clear, EBITDA is earnings before interest, taxes, depreciation, and amortization, but we’re talking about indicators and what makes a technology company a strong acquisition without a crystal ball, without knowing what those successful companies may be, the zero to 100 and beyond. What’s a good example of how companies can actually start looking at some indicators?

Jeff: Well, if you’re six, 12 months into it, things that I look for… Now, let’s say you’ve got a non-tech company acquiring a tech company or even a large tech company acquiring a small tech company. When you enter the software economy, there are a lot of things that are different. One of them is talent, the way people think, the types of people that you hire, the culture of these software economy companies. And the great sign is how many of the key people are staying around, and more importantly, what their roles are in the company.

So when you see companies acquired and the executives from the acquired companies start getting promoted and taking on larger roles in the acquiring organization, that’s hugely a sign that the cultures are aligning. The things that the acquired company brings to the table are valued by the acquirer, the cultures are integrating. The benefits might take a little longer because of the integration of products and technology and channels and markets. But if you see the talent integrating in that way, I’d say that’s a pretty good sign. Because software is an intangible IP and it’s very much tied to the people who build it and maintain it. If you have talent drains due to culture, compensation, or other things after an acquisition, that’s usually the leading indicator that the thesis is going to go up in smoke. So that’s the first thing I look for.

Now, in a private equity deal you don’t quite see that, because the company is pretty much the company. In some cases, the only thing that changes is the board of directors, especially if a company was well run and a private equity firm wants to keep it that way, there may not be a lot of change and things may just go on as normal. The only thing that changes is the shareholders. But when it’s an operating company being acquired, talent is a good place to look for leading indicators.

Laurel: With a growing number of companies attracted to the technology landscape as you described, it seems like a crowded market. So how can a company differentiate itself to stay competitive and be discerning when looking for investments?

Jeff: Yeah. So I think getting those theses right. Just being a holding company and buying something is probably not the best approach, although there are holding company models out there. Doubling down on the strategy and the M&A, some people might call it an M&A thesis or the integration thesis. So let’s take examples. Vertical integration: If you’re going to vertically integrate or acquire a supplier, that could have significant synergy, could have significant differentiation. And if you take the time to put that strategy out, find the right companies to acquire that fit the thesis, and make sure you fund the integration. Integration is not just a bunch of rows on spreadsheets, but it’s actually getting on the ground, in the weeds, figuring out the operating models, people, the business processes, the tools that are needed to successfully integrate to see your thesis through. Those can be differentiating and those can be game changers for companies both in the marketplace and on the P&L.

Laurel: And you mentioned this earlier, the unknown-risk, high-reward aspect of acquiring technology companies, but new capabilities and talent are something that a new company can offer. So what are the most common obstacles that companies face then?

Jeff: I touched on this before, it’ll be a little redundant, but I would say the first is you’re coming into the software economy, it’s new to you. Companies can go from zero to 100 pretty quickly, but they can go from 100 to zero. The landscape is littered with companies that were high-flyers, leaders in their space, that are now gone and out of business. Were basically acquired in fire sales and somebody’s running out the maintenance long tail on some of these companies. So you’ve seen that in old-school desktop publishing, you’ve seen that in old-school CRM and ERP, you’ve seen that in various vertical applications serving vertical businesses. All those sectors have had once-dominant players that didn’t innovate, maybe lost their key talent, maybe had an upside-down balance sheet, were over-leveraged, and basically disappeared and went off the map as quick as they came on.

Again, you can go from not being a company to being the high-flyer leader in the space of five, six, seven years and just as quickly, possibly more quickly, go to zero. So it’s really important that folks acquiring these companies are investing in them, understand that risk, and realize that sometimes drastic things have to be done to keep these companies growing and high-flying, even after you think they’ve reached their apex.

And then the other is, cultures don’t integrate. Again, touched on this before, talent is a key thing. Software economy companies tend to have different cultures than businesses from other parts of the economy. And it’s pretty important that that’s recognized and there are strategies for dealing with it or else the talent won’t be as innovative, will have high attrition risk because—I’ll leave and start a competitor. We’ve seen that play out. Company gets acquired, people run out their non-compete or their retention bonus for a year, then they all go and start another company, and that other company does it even better.

One thing you’ll find in software is the first guys to do it are usually not the winners. In fact, often you may not know the first guys or gals. The second time around is usually better. Why? Because you learn from your mistakes. Or better yet, you learn from someone else’s mistakes. You have a model to work from. The first time you’re designing a mobile phone, you’re the first guys, you got to figure it all out. The second time, you’re learning from the guys who got it like 60% right, but 40% wrong.

The time before, in the web browser space, it wasn’t the first guys who won. In the mobile phone space, it wasn’t the first guys who won. In the desktop computing space, it wasn’t the first guys who won. It’s usually the second or third. So that’s a pretty common theme and often people who were on those first teams that learned, and they go start the second teams. And if you have that talent and you let it walk out the door, shame on you.

So trying to be the ones that put yourself out of business versus letting your former employees figure out how to do it is always a good idea. And I think the best companies do that. They form teams, they give them some autonomy, and they say, “Can you go build the next generation of our product rather than a competitor? Go build it.” And then that’s how companies reinvent themselves and mitigate the risk of the talent culture or the innovation culture walking out the door or springing up somewhere else.

Laurel: It certainly helps to have that history as perspective now, but looking forward into the future, how will private equity help shape the technology landscape in the next few years?

Jeff: So I mean, look, it’s a little bit of the Wild West. Private equity has never been so dominant in tech. I mean it’s hard to believe, but if you go back 12, 13, 14 years, maybe even 10, there was almost no private equity investing in tech. Private equity firms didn’t understand tech; they didn’t understand all the things I mentioned. Why the high gross margins? Why the high growth rates? I’m scared of companies going from 100 to zero; I know they can go from zero to 100. I don’t understand all this intangible IP that I can’t touch and feel. It’s not in the factory, it’s not an inventory. There was very little investing in tech and then there were some deals done 10, 15 years ago that were the first tech deals, big take-privates. And then more firms got into it, and then some specialized firms started doing only tech. And now tech private equity is a big part of our economy and the capital markets.

Some numbers that might be interesting to people. Last year in 2021, there were 129 tech IPOs for $70 billion, and actually a small fraction of that in 2022—so far, only 19 deals for $1.6 billion because of the market corrections. And if we look at buyouts, there were almost as many—in 2021 there were 139 buyouts, actually a little more, for $50 billion. But in 2022, this market actually was so white-hot at the beginning of the year that there were 99 deals for $60 billion. So there were 80 more tech take-privates than there were IPOs in 2022. Tech represented 43% of those deals, by value and by number.

So tech is dominating the capital markets and private equity and tech are becoming a substantive portion of the capital markets. More so, the drastic change has been on the private side, and people realize there are companies now that have gone private, public, private, public, private, public, bounce back and forth, because there are things you can do as a private company that you can’t do as a public company. The quarterly financials make it hard to do things like a SaaS transformation, to go from big upfront contracts to recurring revenue. Makes it hard to do big investments in new products, makes it hard to spend a lot of money on R&D [research and development], or a lot of money on R versus the D, development and maintenance. A lot of these are things that people find are easier to do as a private company, outside of having to report every quarter and disclose everything you’re doing to the public. Thus, you are seeing this cycle that private equity is just a pretty meaningful part of the capital markets for tech companies overall. And we’re doing bigger and bigger deals.

We worked on a $17 billion deal. And I think we’re going to see a lot more deals in that size neighborhood over the years to come. While private equity has been slow the last six months or so with the correction in the public markets, interest rates going up, what have you, there’s a lot of pent-up demand. There’s still a lot of money on the sidelines in private equity that’s going to be invested when private equity pops back, which will likely happen at some point here in the first half of [20]23. It’s probably going to come back with a vengeance, and I think we’ll see the effect of private equity on the capital markets for tech companies be as significant as ever later in [20]23.

Laurel: Completely fascinating. Jeff, thank you so much for being here on the Business Lab today.

Jeff: Appreciate it. Thank you.

Laurel: That was Jeff Vogel, head of the Software Strategy Group for EY-Parthenon, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

Learn more about EY-Parthenon disruptive technology solutions at ey.com/us/disruptivetech.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The views expressed in this podcast are not necessarily the views of Ernst & Young LLP or other members of the global EY organization.

Welcome to the oldest part of the metaverse

Today’s headlines treat the metaverse as a hazy dream yet to be built, but if it’s defined as a network of virtual worlds we can inhabit, its oldest extant corner has already been running for 25 years. It’s a medieval fantasy kingdom created for the online role-playing game Ultima Online—and it has already endured a quarter-century of market competition, economic turmoil, and political strife. So what can this game and its players tell us about creating the virtual worlds of the future?

Ultima Online—UO to its fans—was not the first online fantasy game. As early as 1980, “multi-user dungeons,” known as MUDs, offered text-based role-playing adventures hosted on university computers connected via Arpanet. With the birth of the World Wide Web in 1991, a handful of graphical successors like Kingdom of Drakkar and Neverwinter Nights followed—allowing dozens or hundreds of players at a time to slay monsters together in a shared digital space. In 1996 the “massively multiplayer” genre was born, and titles such as Baram and Meridian 59 attracted tens of thousands of paying subscribers. 

But in 1997, Ultima transformed the industry with a revolutionary ambition: simulating an entire world. Instead of small, static environments that were mainly backdrops for combat, UO offered a vast, dynamic realm where players could interact with almost anything—fruit could be picked off trees, books could be taken off shelves and actually read. Unlike previous games where everyone was a heroic knight or wizard, Ultima realized a whole alternative society—with players taking on the roles of bakers, beggars, blacksmiths, pirates, and politicians. 

Perhaps most important, Ultima let people really live there. In most previous games, players occupied areas while logged in but had no persistent presence while offline. One, Furcadia, let users create customized mini-dimensions that temporarily connected to a shared space. But in UO, whatever things players built remained for others to interact with even when the player who had built them logged off. People could construct permanent cottages or castles anywhere there was open land and decorate them as they pleased. They could also form town governments or just have friends in to socialize over virtual ale and mutton. In short, it promised to be a place.

This grand vision reflected the backgrounds of the development team at Origin Systems. Richard Garriott, its founder, had spent nearly two decades producing a series of single-player Ultima games that increasingly emphasized player freedom and complex moral choices. UO’s lead designer, Raph Koster, and most of its key programmers had cut their teeth on text-based MUDs—where the lack of computation-hungry graphics enabled servers to focus on deeper quantitative modeling than other games could attempt. A thriving circle of MUD hobbyists had been experimenting for years with complex simulations of things like agriculture, weather, and herbal medicine. 

Burning to apply such ideas on a massive scale, Koster and his wife, Kristen (also an Origin designer), devised an elaborate resource ecology system that would make Ultima’s game world come alive. Fields would grow grass. Herbivores would eat the grass. Carnivores would hunt the herbivores. Instead of just sitting around waiting to be killed by adventurers, dragons would seek to satisfy something like Maslow’s hierarchy of needs—first food, then shelter, and finally a lust for shiny treasure. This could foster truly inventive thinking. Rather than killing marauding monsters to protect a peaceful town, players could herd tasty deer into their path. In alpha testing, this worked well, and the team sensed that their careful plans and powerful simulation would give them substantial control over the ebb and flow of game play.  

The public beta test was a rude awakening. An unprecedented 50,000 people paid $5 each for early access to the game—and swarmed over the world like a plague of locusts, killing everything in sight. The rabbits didn’t live long enough to be hunted by wolves, and the dragons were slain long before anyone considered their motivations. It was ecological collapse. And with servers groaning under the weight of AI processes that were going unnoticed anyway, the team reluctantly tore out the whole system. As if to underscore the developers’ loss of control, near the end of the beta a player assassinated the king himself—Richard Garriott’s avatar, Lord British. 

When the full game went live in September ’97, tidal waves of players roamed the kingdom of Britannia, clicking on everything and using game mechanics in ways the Origin programmers had never anticipated. Soon, a group of murderous carpenters observed that wooden furniture could block the movement of other characters. They barricaded the gates of a major city with hundreds of tables and armoires, and ambushed anyone trying to escape. The victims appealed to Origin, but Raph Koster pushed for a solution that leaned harder into simulation. A patch was rushed out that let players solve the problem themselves: axes could now be used to chop up furniture.

Other misbehavior targeted weaknesses in the game engine itself, which were much harder to fix. Cunning miscreants nested thousands of objects in one place to create “black holes” that crashed the game. Some exploited UO’s lack of a gravity system to float on chairs into rivals’ houses and loot them clean.

Such failures, combined with extreme lag and numerous bugs, sparked widespread player outrage. But a strange thing happened. Instead of just quitting, as most people do when unsatisfied with a product, many stayed and fought for change. That November, a large crowd gathered in the capital, stripped as naked as their hard-coded loincloths would allow, and staged a drunken protest in Lord British’s castle. For Garriott, this level of passion for the game—even in the form of anger—was a remarkable validation. 


Yet it was quickly dawning on Origin that it was no longer merely a tech company. It was a government. And before long, that government presided over a population of more than 100,000 subscribers—larger than Charleston, South Carolina. Without the civic institutions that exist in real life, like school boards and labor unions, there were no outlets for players to express their wishes and feel heard. So Koster and the team set up “House of Commons” sessions where concerned citizens could chat directly with developers. The lobbying was fierce. Mages wanted spells to be stronger and swords to be weaker. Swordsmen wanted the opposite. There was no way to please everyone—no brilliant technical answer. The only path forward was the hard work of actual governance: communication, compromise, and transparency.

The most urgent policy question was what to do about murder. Garriott’s concept for Ultima Online stressed the freedom to role-play both good and evil, so the game enabled players to attack, rob, and kill each other. But the kingdom had turned into a slaughterhouse, with roving bands of powerful “player killers” butchering anyone who strayed outside the major cities—whose computer-controlled guards were invincible protectors in town but would ignore banditry even one step outside their jurisdiction. Although resurrection was possible, anything characters carried when they died could be stolen. So when curious new subscribers lost everything on their first trip into the woods, many logged off and never returned. 

Again, Koster sought to empower players through richer simulation—establishing a bounty system that let victims put prices on murderers’ heads. Undeterred, the outlaws treated the bounties list as a leaderboard. Several more rule changes followed, including a reputation system that tracked players’ actions and applied penalties to disincentivize killing. Yet players found numerous loopholes to torment each other in ways the software wouldn’t notice. 


In 2000, Garriott and Koster both left the company, and with subscriber attrition still severe, Origin opted for a drastic solution. It split the world into two mirror-image realms—Felucca, where nonconsensual violence remained possible, and Trammel, where player-versus-player combat was strictly opt-in. The move remains bitterly controversial, with critics saying it eliminated the sense of peril that made UO unique. But users voted with their feet and their dollars. Almost immediately, the great majority of Britannians migrated to Trammel. And with players free to choose which experience they wanted, subscriptions swelled to 250,000.

Concurrent with the player-killing epidemic, an economic crisis had also been unfolding. The game’s resource system had initially been a closed loop, with fixed amounts of gold and raw materials available. Servers would generate such goods on assorted trolls, zombies, and lizardmen that would spawn in savage wildlands or deep in foul dungeons. By killing them, adventurers could claim this treasure. Resources that players consumed or gold they spent at AI-run shops would go back into an abstract pool that the server would draw from as new monsters spawned. This system broke down almost immediately, though, as players mindlessly hoarded everything they could get their hands on—preventing fresh treasure from appearing. But when Origin changed its policy and disconnected the loop, monster loot became a firehose of wealth into the economy, and hyperinflation followed. 

[Photo captions]
Sneak attack: When Ultima Online creator Richard Garriott forgot to reengage his avatar Lord British’s invulnerability setting during the game’s 1997 beta test, a player called Rainz assassinated him with a magic fire spell.
Mortal peril: Slaying a dragon is a worthy challenge, but the most dangerous foes are other players.
Holiday party: A large in-game gathering celebrated Christmas in 2002.
DIY: UO allows players to build fully customized homes, like this 2018 castle by Dot Warner.

On a new auction site called eBay, players were selling their in-game riches for real money. At first, one US dollar would get you about 200 Britannian gold pieces—making these fantasy coins more valuable than the Italian lira. About a year later, a dollar could buy more than 10,000 pieces of gold. With the market for virtual goods booming, “gold farming” became a big business in the real world, as entrepreneurs in China or Mexico hired locals to grind all day in the game for low wages. 

Another inflation source was “duping”—exploits that tricked the servers into duplicating items. Origin did its best to patch the bugs and delete dupes, but enough got into circulation to keep gold prices in free fall. When some customer service “Game Masters” were found to be corruptly colluding with players, live producer Rich Vogel stood up an internal affairs unit to watch the watchers.

A major challenge for the developers was figuring out what was actually happening in the first place. Real-world governments need enormous bureaucracies to gather information about their economies. One might guess this wouldn’t be an issue in virtual worlds, where everything is literally made out of information. But it is. At launch, most player wealth statistics were buried inaccessibly in the binary of the server backup files. Without comprehensive gold metrics, Raph Koster resorted to tracking inflation via eBay prices. It took many frantic months to build analytics tools and integrate them into dashboards that could inform decision-making.

As the picture clarified, Origin realized it needed better “gold sinks”—mechanisms to fight inflation by pulling gold out of UO’s economy. Taxing hoarded wealth would have caused a subscriber revolt. Selling rich characters godlike weapons might have sucked up enough gold to solve inflation, but it would’ve created a class of invincible terminators and wrecked game balance. 

The solution was ingenious: purely cosmetic status symbols. For the price of a small castle, Britannia’s elite could buy neon hair dye and impress commoners with a violently green mohawk. These measures, though, offered only a Band-Aid—by 2010, gold was at 500,000 per dollar.

By this time, competitors like World of Warcraft had lured away a majority of UO’s players. But while most of its peers have shut down, Ultima Online has stabilized and maintains a sturdy core of users—perhaps around 20,000—even a quarter-century after its debut. What’s kept them?

Current subscribers say the sense of identity and investment UO offers is unrivaled. Thanks in part to gold sinks and expansion content, it far surpasses even contemporary titles in options for customizing costumes and housing. As a result, the game’s original Renaissance-fair aesthetic has drifted to something weirder. Traveling the land today, you’ll see gargoyle-men wearing sunglasses, and ninjas in fluorescent armor riding giant spiders. Quaint medieval villages have given way to tracts of garish McMansions. But even if this riotous mishmash breaks the verisimilitude for players, it’s all theirs.


Yet the greatest factor keeping the community alive is the relationships and memories they’ve built together. Yes, other games have better graphics and flashier features. But where else can a friend who lives continents away in the offline world drop over for reaper fish pie and admire the rare painting you pilfered together during the Clinton administration? 

Often, these attachments are intensely personal—quite a few players had built virtual homes with parents or friends who later died in real life, and maintaining them is a way to feel connected to people they’ve lost. Some met their real-life spouses on late-night dungeon crawls. In sum, Britannia has truly become a place, and people stay for all the reasons we cherish real-world places. 

The nostalgia is so strong that some Ultima diehards have reverse-engineered the source code and set up free bootleg servers touting a “pure” experience that recaptures the spirit of the game’s early days. Thousands of former players have flocked to them. One fan-made service lets people play via web browsers. Another project aims to incorporate UO into virtual reality. 

As metaverse technologies make such worlds ever more accessible, it’s easy to imagine Britannia someday being a sort of pilgrimage site—where the brightest promise of simulated worlds first flowered, and where their toughest pitfalls were first overcome. Those building the next generation of those worlds would do well to learn the lessons of Ultima Online. 

For one, as Origin discovered, it is impossible for designers to foresee all the ways users can break a system—keeping things running is an endless war that requires flexible improvisation. Giving people more freedom makes this task even harder, but it also promotes the sense of investment that lets them put down roots.

Further, when users inhabit a virtual world, their relationship with its creators is fundamentally political. It is tempting to believe that the community’s problems can be solved with innovative engineering alone, but no clever algorithm can avert the need for wise governance. Just as in real-world policy, citizens respond to incentives, and antisocial behavior is hard to curb without unintended consequences.

Ultimately, it is human connections that sustain these worlds, not technological bells and whistles. It takes humility for developers to recognize that the content they produce is not the core of the experience. So when those pilgrims arrive in Britannia, we should expect that many of its founding citizens will still be there to welcome them. 

John-Clark Levin is an author and journalist at the intersection of technology, security, and policy. 

Three ways networking services simplify network management

Organizations rely on networks to power their work. But managing the myriad applications and data that a business depends on is not without its challenges. That’s where networking services come in. Think of networking services—like Azure Networking Services—as technology’s orchestra conductor.

Instead of closely studying sheet music, understanding the skills of dozens of musicians, and setting the tempo during rehearsals and performances, networking services track all the data and applications a company is using on its chosen network and coordinate their traffic across cloud boundaries—even as most networks’ bandwidth and scale continue to increase, due to ever more complex rendering and compute requirements.

Networking services can ease network management in nearly every industry. That’s important because highly functioning networks are adding tremendous value to organizations. Three technology benefits illustrate this exciting promise—and showcase how networking services support them. These benefits also address concerns that are top of mind for many organizations—provisioning the growing remote workforce, making use of the data they collect, and boosting network security.

#1: Giving remote workers secure resource access

If a network is an orchestra, the performance hall is getting mighty crowded—in large part because of the swelling remote workforce. Experts predict that the remote workforce will keep growing in 2023—after reaching 25% of all professional jobs in North America at the end of 2022. Employees want flexibility to work from anywhere, while hybrid work is expanding the attack surface.

As a result, organizations must secure access to resources and satisfy increasingly complex regulations. Among the organizations that have moved to a fully remote or hybrid workforce are government agencies, which must satisfy some of the most restrictive regulations to protect sensitive data. Remote work requirements now range from traditional office productivity tools to very complex software and system requirements for applications such as media rendering and CAD.

It’s critical that remote and hybrid workers be able to access the underlying compute resources their work relies on. For many companies, these are cloud-based virtual machines, such as Azure general purpose virtual machines or specialized compute virtual machines that run on NVIDIA GPUs. With networking services, companies can use universal secure connectivity to give even their remote workforce access to their on-premises and cloud resources from anywhere.

App delivery services can ensure that those remote workers have the resources they need to complete tasks, while monitoring services give IT a comprehensive view of network resources and diagnostics, with telemetry data to keep everyone working without interruption. Other technology solutions enable employees, vendors, and partners to access internal and cloud apps.

Equipping remote workers with the right tools—including network connectivity tools and security tools—will become increasingly important because the number of mobile workers is expected to grow from 78.5 million in 2020 to 93.5 million in 2024, according to an IDC forecast. These network users—sometimes called “deskless workers”—will make up nearly 60% of the US workforce by late 2024.

All those workers need devices to connect. The number of 5G connections is predicted to rise to 1 billion worldwide by mid-2023 and 2.6 billion in 2025, according to a CCS Insight study. The demographic trend of “productivity paranoia,” where workers are eager to prove they can be productive from anywhere, also will contribute to new networks and new devices that need to be connected and secured.

Other potential remote work scenarios that could benefit from networking services include the following:

  • Connected field service in manufacturing through a combination of IoT diagnostics, scheduling, asset maintenance, and inventory optimization.
  • Geospatial analytics to help energy sector companies gain deeper insights around key decisions in a scalable, cost-efficient manner.

#2: Maximizing the value of edge intelligence

Edge devices—everything from car systems to temperature sensors on the manufacturing floor—are collecting more data than ever. Collecting all that data takes a strong network. Connecting all that data so you can derive business intelligence from it takes networking services.

With edge devices proliferating all the time, the challenge may seem impossible—not unlike asking a conductor to manage an orchestra in which the musicians swap instruments every few minutes. Services like Azure Traffic Manager can help by routing traffic based on priority, geography, and performance.
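
To make the routing idea concrete, here is a minimal, purely illustrative Python sketch of how a traffic manager might pick an endpoint by priority or by measured performance. It is not the Azure Traffic Manager API; the endpoint names, fields, and selection rules are hypothetical, and the real service also offers geographic and weighted routing.

```python
# Illustrative sketch only: a toy model of priority- and performance-based
# routing, similar in spirit to what a traffic-routing service does.
# Endpoint names and fields are hypothetical, not an Azure API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Endpoint:
    name: str
    region: str
    priority: int        # lower number = preferred (failover order)
    healthy: bool        # result of a health probe
    latency_ms: float    # measured latency from the requesting user

def pick_endpoint(endpoints: list[Endpoint], mode: str) -> Optional[Endpoint]:
    live = [e for e in endpoints if e.healthy]
    if not live:
        return None
    if mode == "priority":
        return min(live, key=lambda e: e.priority)     # highest-priority healthy endpoint
    if mode == "performance":
        return min(live, key=lambda e: e.latency_ms)   # lowest observed latency
    raise ValueError(f"unknown routing mode: {mode}")

endpoints = [
    Endpoint("app-eastus", "eastus", priority=1, healthy=False, latency_ms=40),
    Endpoint("app-westeu", "westeurope", priority=2, healthy=True, latency_ms=95),
    Endpoint("app-seasia", "southeastasia", priority=3, healthy=True, latency_ms=180),
]

print(pick_endpoint(endpoints, "priority").name)      # app-westeu (east US failed its probe)
print(pick_endpoint(endpoints, "performance").name)   # app-westeu (lowest latency right now)
```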

With networking services, companies can save money on troubleshooting issues, increase staff productivity, and meet safety and compliance requirements. In the automotive industry, for instance, networking services capabilities can lead to connected car solutions and mission-critical real-time insights.

Other popular edge intelligence use cases include gaining situational awareness of what’s happening on oil and gas offshore rigs and creating smart building solutions in real estate.

#3: Tightening security to better protect people and data

Traditional security measures have been stretched to the breaking point by increasingly sophisticated attacks. Imagine someone who dons a tuxedo so they can masquerade as an orchestra member, sneak on stage, and stomp on instruments.

Security solutions like Azure Network Security can secure both apps and network infrastructure, automate network attack alerts, boost security at the edge, increase app availability and performance, and protect the network from common attacks. Employing these networking security solutions can enhance protection across industries:

  • Healthcare providers can better protect patient privacy while securely accessing data from Internet of Things devices, driving improved patient outcomes. And as virtual healthcare visits grow more common, healthcare workers can benefit from networking services as remote workers too.
  • Financial services employees can track financial transactions across banking and credit card networks, preventing fraud with AI-based predictive risk modeling.
  • Retail workers can use advanced video surveillance and analytics capabilities to monitor storefront events, offer frictionless checkout, and optimize stock replenishment.

How to choose networking services

When choosing a networking services solution, look for the equivalent of an accomplished, top-tier conductor who’s led some of the most celebrated orchestras in the world. Prioritize a collection of services that make it easy to do all of the following:

  • Migrate apps and workloads
  • Easily detect anomalies
  • Optimize for superior performance
  • Increase the security and resiliency of operations

And just as conductors have different specialties—perhaps in leading early music, or standard classical repertoire, or jazz combos—networking services break down into specialty areas, which can be mapped to your organization’s needs. Four major networking services categories are connectivity, network security, application delivery, and network monitoring.

Networking services can have a tremendous impact on an organization. Among the exciting possibilities, the right networking services solution can ease the complexity of remote work, maximize the value of edge intelligence, and tighten security to better protect people and data. For organizations with Azure, Azure Networking offers many capabilities that can be used separately or together. When it all comes together, your network will run as smoothly as a perfectly performed concerto.

For more information on how your complex remote work scenarios can be supported with NVIDIA GPUs to move your business forward, register for the virtual NVIDIA GTC March 22–25 event.

These simple design rules could turn the chip industry on its head

RISC-V is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

Python, Java, C++, R. In the seven decades or so since the computer was invented, humans have devised many programming languages—largely mishmashes of English words and mathematical symbols—to command transistors to do our bidding. 

But the silicon switches in your laptop’s central processor don’t inherently understand the word “for” or the symbol “=.” For a chip to execute your Python code, software must translate these words and symbols into instructions a chip can use.  

Engineers designate specific binary sequences to prompt the hardware to perform certain actions. The code “100000,” for example, could order a chip to add two numbers, while the code “100100” could ask it to copy a piece of data. These binary sequences form the chip’s fundamental vocabulary, known as the computer’s instruction set. 
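
To make the idea concrete, here is a toy sketch in Python that treats the two hypothetical bit patterns above as a two-word vocabulary. The opcodes, register names, and dispatch logic are purely illustrative; real instruction sets such as x86, Arm, and RISC-V encode opcodes and operands very differently.

# Toy "instruction set": a fixed mapping from bit patterns to actions.
# The encodings below are the article's hypothetical examples, not real ones.
OPCODES = {
    "100000": "ADD",   # add two numbers
    "100100": "COPY",  # copy a piece of data
}

def execute(instruction: str, registers: dict) -> None:
    """Run one instruction of the form '<opcode> <dest> <src1> [<src2>]'."""
    opcode, *operands = instruction.split()
    op = OPCODES.get(opcode)
    if op == "ADD":
        dest, a, b = operands
        registers[dest] = registers[a] + registers[b]
    elif op == "COPY":
        dest, src = operands
        registers[dest] = registers[src]
    else:
        raise ValueError(f"unknown opcode: {opcode}")

regs = {"r1": 2, "r2": 3, "r3": 0}
execute("100000 r3 r1 r2", regs)   # r3 = r1 + r2 = 5
execute("100100 r1 r3", regs)      # copy r3 into r1
print(regs)                        # {'r1': 5, 'r2': 3, 'r3': 5}

The point of the sketch is simply that a chip’s vocabulary is a fixed mapping from bit patterns to actions, and software can only run on hardware that speaks the same vocabulary.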

For years, the chip industry has relied on a variety of proprietary instruction sets. Two major types dominate the market today: x86, which is used by Intel and AMD, and Arm, made by the company of the same name. Companies must license these instruction sets—which can cost millions of dollars for a single design. And because x86 and Arm chips speak different languages, software developers must make a version of the same app to suit each instruction set. 

Lately, though, many hardware and software companies worldwide have begun to converge around a publicly available instruction set known as RISC-V. It’s a shift that could radically change the chip industry. RISC-V proponents say that this instruction set makes computer chip design more accessible to smaller companies and budding entrepreneurs by liberating them from costly licensing fees. 

“There are already billions of RISC-V-based cores out there, in everything from earbuds all the way up to cloud servers,” says Mark Himelstein, the CTO of RISC-V International, a nonprofit supporting the technology. 

In February 2022, Intel itself pledged $1 billion to develop the RISC-V ecosystem, along with other priorities. While Himelstein predicts it will take a few years before RISC-V chips are widespread among personal computers, the first laptop with a RISC-V chip, the Roma by Xcalibyte and DeepComputing, became available in June for pre-order.

What is RISC-V?

You can think of RISC-V (pronounced “risk five”) as a set of design norms, like Bluetooth, for computer chips. It’s known as an “open standard.” That means anyone—you, me, Intel—can participate in the development of those standards. In addition, anyone can design a computer chip based on RISC-V’s instruction set. Those chips would then be able to execute any software designed for RISC-V. (Note that technology based on an “open standard” differs from “open-source” technology. An open standard typically designates technology specifications, whereas “open source” generally refers to software whose source code is freely available for reference and use.)

A group of computer scientists at UC Berkeley developed the basis for RISC-V in 2010 as a teaching tool for chip design. Proprietary central processing units (CPUs) were too complicated and opaque for students to learn from. RISC-V’s creators made the instruction set public and soon found themselves fielding questions about it. By 2015, a group of academic institutions and companies, including Google and IBM, founded RISC-V International to standardize the instruction set. 

The most basic version of RISC-V consists of just 47 instructions, such as commands to load a number from memory and to add numbers together. However, RISC-V also offers more instructions, known as extensions, making it possible to add features such as vector math for running AI algorithms. 
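
The sketch below, in the same toy Python style, illustrates that base-plus-extensions structure. The add and addi mnemonics and the hard-wired zero register x0 echo real RISC-V conventions, but the vector instruction and the rest of the model are simplified assumptions for illustration, not a faithful RISC-V implementation.

def make_cpu(extensions=()):
    """Build a dispatch table: a tiny base vocabulary plus any opted-in extensions."""
    def add(regs, d, a, b): regs[d] = regs[a] + regs[b]
    def addi(regs, d, a, imm): regs[d] = regs[a] + int(imm)
    ops = {"add": add, "addi": addi}

    if "vector" in extensions:
        # An extension adds instructions, e.g. element-wise math useful for AI workloads.
        def vadd(regs, d, a, b): regs[d] = [x + y for x, y in zip(regs[a], regs[b])]
        ops["vadd"] = vadd
    return ops

def run(program, ops):
    regs = {"x0": 0}   # x0 is hard-wired to zero, as in RISC-V
    for line in program:
        mnemonic, *args = line.split()
        ops[mnemonic](regs, *args)
    return regs

# A base-only core is enough for a simple controller:
print(run(["addi x1 x0 7", "addi x2 x0 35", "add x3 x1 x2"], make_cpu()))

# Opting in to the simplified, illustrative vector extension:
vec_ops = make_cpu(extensions=("vector",))
regs = {"x0": 0, "v1": [1, 2, 3], "v2": [10, 20, 30]}
vec_ops["vadd"](regs, "v3", "v1", "v2")
print(regs["v3"])   # [11, 22, 33]

A chip aimed at a simple controller could implement only the base vocabulary, while one aimed at AI workloads could opt in to the vector extension. That modularity is the kind of flexibility RISC-V’s extension model is designed to offer.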

With RISC-V, you can design a chip’s instruction set to fit your needs, which “gives the freedom to do custom, application-driven hardware,” says Eric Mejdrich of Imec, a research institute in Belgium that focuses on nanoelectronics.

Previously, companies seeking CPUs generally bought off-the-shelf chips because it was too expensive and time-consuming to design them from scratch. Particularly for simpler devices such as alarms or kitchen appliances, these chips often had extra features, which could slow the appliance’s function or waste power. 

Himelstein touts Bluetrum, an earbud company based in China, as a RISC-V success story. Earbuds don’t require much computing capability, and the company found it could design simple chips that use RISC-V instructions. “If they had not used RISC-V, either they would have had to buy a commercial chip with a lot more [capability] than they wanted, or they would have had to design their own chip or instruction set,” says Himelstein. “They didn’t want either of those.”

RISC-V helps to “lower the barrier of entry” to chip design, says Mejdrich. RISC-V proponents offer public workshops on how to build a CPU based on RISC-V. And people who design their own RISC-V chips can now submit those designs to be manufactured free of cost via a partnership between Google, semiconductor manufacturer SkyWater, and chip design platform Efabless. 

What’s next for RISC-V

Balaji Baktha, the CEO of Bay Area–based startup Ventana Micro Systems, designs chips based on RISC-V for data centers. He says design improvements his team has made—possible only because of the flexibility that an open standard affords—have allowed these chips to perform calculations more quickly with less energy. In 2021, data centers accounted for about 1% of total electricity consumed worldwide, and that figure has been rising over the past several years, according to the International Energy Agency. RISC-V chips could help lower that footprint significantly, according to Baktha.

However, Intel and Arm’s chips remain popular, and it’s not yet clear whether RISC-V designs will supersede them. Companies need to convert existing software to be RISC-V compatible (the Roma supports most versions of Linux, the operating system released in the 1990s that helped drive the open-source revolution). And RISC-V users will need to watch out for developments that “bifurcate the ecosystem,” says Mejdrich—for example, if somebody develops a version of RISC-V that becomes popular but is incompatible with software designed for the original.

RISC-V International must also contend with geopolitical tensions that are at odds with the nonprofit’s open philosophy. Originally based in the US, the nonprofit faced criticism from lawmakers that RISC-V could cause the US to lose its edge in the semiconductor industry and make Chinese companies more competitive. To dodge these tensions, it relocated to Switzerland in 2020.

Looking ahead, Himelstein says the movement will draw inspiration from Linux. The hope is that RISC-V will make it possible for more people to bring their ideas for novel technologies to life. “In the end, you’re going to see much more innovative products,” he says. 

Sophia Chen is a science journalist based in Columbus, Ohio, who covers physics and computing. In 2022, she was the science communicator in residence at the Simons Institute for the Theory of Computing at the University of California, Berkeley.

Modern data architectures fuel innovation

Companies have contended with a deluge of data for years. And while most have not yet found a good way of managing it all, the challenges—diverse data sources, types, and structures and new environments and platforms—have grown ever more complex. At the same time, deriving value from data has become a business imperative, making the consequences of not managing your organization’s data more severe—from lack of critical business insights to the hobbling of AI implementations.

Greater data complexity leads to greater consequences

Not only is data increasing in volume, velocity, and variety, but the data estate itself has also become increasingly intricate. For years, organizations have struggled with data being sequestered in separate silos within the company. Today, data location adds another layer of complexity, with some data on premises, some in the cloud, and some streaming in from the edge. By 2025, more than 50% of enterprise-critical data will be created and processed outside the data center or cloud, Gartner analysts estimate. Organizations realize that to be truly data driven, they must reach both wider and deeper into their operations, identifying and digesting data and information from various departments and sources.

“Each line of business is driving digital transformation in its own way,” says Naveen Kamat, executive director and CTO of data and AI services at Kyndryl, an IT infrastructure services provider. “They are setting up their own apps in the cloud, which generate data daily. Then there’s web and social media data coming in. The enterprise data estate is becoming much, much bigger; it’s becoming much more complex to manage.”

The insurance industry provides an example of today’s data landscape complexity. One substantial challenge to good data management in insurance is a plethora of legacy systems built up over the years, says Ali Shahkarami, chief data officer at Allianz Global Corporate & Specialty (AGCS). “That’s especially true for international companies operating across borders with different products, regulatory requirements, and reporting requirements,” he notes. “The ability to do that centrally and in a consistent manner is a big challenge. It impacts everything you build with data and analytics.”

Unfortunately, while data management has become more challenging, data management skills have become harder to come by. The number of skilled data personnel has stayed the same or even dropped over the last decade, even as the number of data and application silos has increased, according to Gartner. That means it takes more time than ever to meet integrated data analytics needs.

The consequences for organizations that fail to manage their data effectively and efficiently are becoming dire. For one thing, the cost of inadequate data management is growing. The cost of poor data can be about 20% of revenue, estimated Thomas C. Redman, president of consultancy Data Quality Solutions, in a co-authored MIT Sloan Management Review article.

“Almost all work is plagued by bad data,” write Redman and Thomas H. Davenport. “The salesperson who corrects errors in data received from marketing, the data scientist who spends 80% of his or her time wrangling data, the finance team that spends three-quarters of its time reconciling reports, the decision maker who doesn’t believe the numbers and instructs his or her staff to validate them.”

Redman and Davenport estimate that less than 5% of companies use their data and data science to gain a competitive edge. “Companies are not seizing the strategic potential in their data,” they conclude.

When it comes to implementing advanced technologies, such as machine learning and artificial intelligence, inadequate data management represents a substantial barrier. Not only could AI programs be ineffective, but “without the right data, building AI is risky and possibly dangerous” if data bias, diversity, and systematic labeling are not part of a data management strategy, says Rita Sallam, distinguished vice president and analyst at Gartner.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Chinese chips will keep powering your everyday life

China Report is MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

What’s better to do at this time than to indulge in some predictions for 2023? This morning, I published a story in MIT Technology Review’s “What’s Next in Tech” series, looking at what will happen in the global semiconductor industry this year. 

To give you a brief overview, many experts told me that the already-stressed global chip supply chain will be challenged even more by geopolitics in 2023.

Over much of 2022, the US started to take steps to freeze China out of the industry—even forming an alliance with the Netherlands and Japan to restrict chip exports to the country. The measures have pushed the once market-driven business to come up with contingency plans to survive the cold-war-like environment—like diversifying from the Chinese supply chain and building factories elsewhere. We may see more similar plans announced in the next year. And at the same time, the US government’s punitive restrictions will start to be enforced and industrial subsidies for domestic chip makers will start to be doled out, meaning new companies may end up on top while others may get penalized for still selling to China.

To learn more about how the US, China, Taiwan, and Europe may navigate the industry this year, read the full article here.

But I also want to highlight something that didn’t make it into the story—a rather unintended outcome of the chip tech blockade. While the high-end sector of China’s chip industry suffers, the country may take a bigger role in manufacturing older-generation chips that are still widely used in everyday life. 

That may sound counterintuitive. Weren’t the US restrictions last year meant to severely hurt China’s semiconductor industry? 

Yes, but the US government has been intentional about limiting the impact to advanced chips. For example, in the realm of logic chips—those that perform tasks, as opposed to storing data—the US rules only limit China’s ability to produce chips with 14-nanometer nodes or better, which is basically the chip-making technology introduced in the last eight years. The restrictions don’t apply to producing chips with older technologies. 

The consideration here is that older chips are widely used in electronics, cars, and other ordinary objects. If the US were to craft a restriction so wide that it destroyed China’s entire electronic manufacturing industry, it would surely agitate the Chinese government enough to retaliate in ways that would hurt the US. “If you want to piss somebody off, push them into a corner and give them no way out. Then they’ll come and punch you really hard,” says Woz Ahmed, a UK-based consultant and former chip industry executive. 

Instead, the idea is to inflict pain only in selective areas, like the most advanced technologies that may power China’s supercomputers, artificial intelligence, and advanced weapons. 

“[US] policies have a very limited immediate impact on the Chinese domestic chip industry because very few Chinese companies have achieved advanced processes, except HiSilicon,” says He Hui, a research director at consulting firm Omdia who focuses on China’s semiconductor market. “But HiSilicon was already [placed on the blacklist] three years ago.” 

And lower-end, legacy chips are also the subsector where China already has a significant advantage. We are not talking about the chips that power a self-driving car’s artificial intelligence, but the chips that control a specific part, like the airbags. Even as Internet of Things technology develops rapidly, it still relies on many small chips that don’t need to be so advanced. 

“That stuff is still going to be made in China, at least based on the current settings that the Biden administration has conveyed. So that obviously leaves a big incentive and a big market for foreign companies—European, Japanese, and South Korean—to continue working with the Chinese,” says John Lee, the director of East West Futures Consulting who researches the global impacts of China’s tech industries. 

Part of the reason China maintains an advantage here is that in a market of mature, lower-end technologies, price is the most important thing. And China has been historically great at low-cost mass production, thanks to low labor costs and generous industrial subsidies from the government.

A future where China fully dominates in low-end chips has already spooked some Western observers. A report published in Lawfare calls this possibility “a huge supply chain vulnerability.” “The Chinese could just flood the market with these technologies. Normal companies can’t compete, because they can’t make money at those levels,” Dan Hutcheson, an economist at research firm TechInsights, told Reuters.

Other countries, including the United States, will still try to get a slice of the market for legacy chips. The US CHIPS Act that became law last year set aside $2 billion specifically for incentivizing domestic production of these technologies. Experts also say the European Union may introduce its own chip legislation in the next two years. 

But this is an industry that takes an infamously long time to see capital investment turn into actual products. And even as foreign companies like Taiwan-based TSMC announce investment plans for US-based factories, they likely won’t shift more capacity to the US without consistent government support, which is hard to guarantee in America’s polarized and volatile political environment. “I think we still need to wait and see whether [these companies] are willing to keep and carry out their promises,” says He Hui.

Lee calls this dynamic one of the more interesting trends that may come out of the current fight over chip controls. “A lot of this capacity is already in China. Most of the new capacity at these [mature] nodes is being built in China, and there’s a limited capacity [of chipmaking equipment supply], even if the money and the political will is there to develop this in the US and EU,” Lee says. The footprint of China in “supplying the more mundane, high-volume, lower-margin, lower-sophistication, but still indispensable chips,” he adds, “is becoming bigger rather than smaller.”

So looking ahead, we’re left with two key questions: Will China’s legacy chip industry prosper while the country struggles to build the high-end sector? Or will the US government introduce more restrictions to throttle China further? As much as I love predictions, I don’t think we will get definitive answers to these questions in 2023. We should keep them in mind as we watch the semiconductor industry navigate a new era of geopolitical volatility.

What impact will it have if China dominates low-end chip manufacturing? Let me know your thoughts at zeyi@technologyreview.com

Catch up with China

1. Chinese state media used to be the main force engineering rage and patriotic sentiment on social media, but individual pro-government accounts have picked up the baton in recent years. (Nikkei Asia $)

2. Chinese researchers and officials have begun uploading genome sequence data of recent covid cases to a global academic database, showing that sub-variants like XBB that are spreading across the world are also circulating in China. (Financial Times $)

3. Millions of Chinese elders have been left vulnerable to the current wave of covid infections in the country, and many have already died. Here’s the moving story of one mother who didn’t survive in Wuhan. (The Atlantic $)

4. ByteDance employees inappropriately accessed the data of two Western journalists and several other US users in an attempt to stop leaks, the company disclosed in an internal investigation. (New York Times $)

5. Tencent finally won state approval to release three of its most successful international games domestically—including the Pokémon franchise game it co-developed with Nintendo. (Bloomberg $)

6. Hacked emails from a Russian state broadcaster detail how Chinese and Russian state media work together to exchange news and social content. (The Intercept)

7. The European Union offered to ship free covid vaccines to China. China rejected it. (Financial Times $)

8. As hospitals become increasingly strained across China, worried individuals are stocking up on oximeters to monitor blood oxygen levels at home. (Pandaily)

Lost in translation

As Beijing positions itself as a global climate leader, local governments are capitalizing on the business of environmental protection and becoming important players. In the last five years, according to a recent analysis from Chinese think tank Qingshan Research, 17 out of China’s 34 provincial governments have formed state-owned “super companies” that focus on getting government contracts in the environmental sector. While they differ in size and expertise, most of these companies offer services in wastewater treatment, garbage disposal, environmental monitoring, or climate investment management. 

As state-owned companies, they have government endorsement and funding, and they often enjoy preferential treatment in the procurement process. But they also have to compete with private companies and with each other. The leading players have won contracts worth hundreds of millions of dollars per year, while others have struggled to secure enough deals or have ended up on the brink of bankruptcy.

One more thing

Did you have any difficult conversations about politics with your family last week? You’re not alone. So many young people in China are doing this that when they express their non-mainstream political opinions for the first time, often in front of friends and family, they post about it on social media and call this moment their “政治出柜”—coming out of the political closet. It happened a lot during the protests against zero covid last year, when young people went to the streets or voiced support for the protesters on their WeChat timelines and in their family group chats. For some, it takes as much courage to come out about their nonconformist political beliefs as to come out about sexuality, if not more.

What’s next for the chip industry

The year ahead was already shaping up to be a hard one for semiconductor businesses. Famously defined by cycles of soaring and dwindling demand, the chip industry is expected to see declining growth this year as the demand for consumer electronics plateaus.

But concerns over the economic cycle—and the challenges associated with making ever more advanced chips—could easily be eclipsed by geopolitics.

In recent months, the US has instituted the widest restrictions ever on what chips can be sold to China and who can work for Chinese companies. At the same time, it has targeted the supply side of the chip industry, introducing generous federal subsidies to attract manufacturing back to the US. Other governments in Europe and Asia that are home to major chip companies have introduced similar policies to maintain their own positions in the industry.  

As these changes continue to take effect in 2023, they will throw a new element of uncertainty into an industry that has long relied on globally distributed supply chains and a fair amount of freedom in deciding whom to do business with.

What will these new geopolitical machinations mean for the more than $500 billion semiconductor industry? MIT Technology Review asked experts how they think it will all play out in the coming year. Here’s what they said.

The great “reshoring” push

The US committed $52 billion to semiconductor manufacturing and research in 2022 with the CHIPS and Science Act. Of that, $39 billion will be used to subsidize building factories domestically. Companies will be able to officially apply for that funding in February 2023, and the awards will be announced on a rolling basis. 

Some of the funding could be used to help firms with US-based factories manufacture military chips; the US government has long been concerned about the national security risks of sourcing chips from abroad. “Probably more and more manufacturing would be reinstated within the US with the purpose to rebuild the defense supply chain,” says Jason Hsu, a former legislator in Taiwan who is currently researching the intersection of semiconductors and geopolitics as a senior fellow at Harvard’s Kennedy School. Hsu says that defense applications are likely one of the main reasons the Taiwanese chip giant TSMC decided to invest $40 billion in manufacturing five- and three-nanometer chips, currently the two most advanced generations, in the US. 

But “reshoring” commercial chip production is another matter. Most of the chips that go into consumer products and data centers, among other commercial applications, are produced in Asia. Moving that manufacturing to the US would likely push up costs and make chips less commercially competitive, even with government subsidies. In April 2022, TSMC founder Morris Chang said that chip manufacturing costs in the US are 50% higher than in Taiwan.

“The problem is going to be that Apple, Qualcomm, and Nvidia—they’re going to buy the chips manufactured in the US—are going to have to figure out how to balance those costs, because it’s going to still be cheaper to source those chips in Taiwan,” says Paul Triolo, a senior vice president at the business strategy firm Albright Stonebridge, which advises companies operating in China.

If chip companies can’t figure out how to pay the higher labor costs in the US or keep getting subsidies from the government—which is hard to guarantee—they won’t have an incentive to keep investing in US production in the long term.

And the United States is not the only government that wants to attract more chip factories. Taiwan passed a subsidy act in November to give chip companies large tax breaks. Japan and South Korea are doing the same.

Woz Ahmed, a UK-based consultant and former chip industry executive, expects that subsidies from the European Union will also be moving along in 2023, although he says they likely won’t be finalized until the following year. “It’ll take them a lot longer than it will [take] the US, because of the horse trading amongst all the member states,” he says.

Navigating a newly restricted market

The controls the US introduced in October on the export of advanced chips and technologies represented a major escalation in the stranglehold on China’s chip industry. Rules that once barred selling this advanced tech to a few specific Chinese companies were expanded to apply to virtually all entities in China. There are also novel measures, like restricting the sale of essential chipmaking equipment to China.

The policies put the industry in uncharted enforcement territory. Which chips and manufacturing technologies will be considered “advanced”? If a Chinese company makes both advanced and older-generation chips, can it still source US technologies for the latter? 

The US Department of Commerce answered some questions in a Q&A at the end of October. Among other things, it clarified that less advanced chip production lines can be spared the restrictions if they are in a separate factory building. But it’s still unclear how—and to what extent—the rules will be enforced. 

We’ll see this play out in 2023. Chinese companies will likely look for ways to circumvent the rules. At least one has already tried to make its chips seem less advanced. Non-Chinese companies will also be motivated to find work-arounds—the Chinese market is gigantic and lucrative. 

“If you don’t have enough enforcement people on the ground, or they can’t get the access, as soon as people realize that, lots of people will break the rules,” Ahmed says.

Several experts believe that the US may hit China with yet more restrictions this year. Those rules may take the form of more export controls, a review process for outbound US investments, or other moves targeting chip-adjacent industries like quantum computing. 

Not everyone agrees. Chris Miller, an international history professor at Tufts University, thinks the US administration may take a break and focus on the current restrictions. “I don’t expect major expansion of export controls on chips [in 2023],” says Miller, the author of the new book Chip War: The Fight for the World’s Most Critical Technology. “The Biden administration spent most of the first two years in office working on those restrictions. I think they are hoping that the policy sticks and they don’t have to make changes to it for some time.”

How China will respond

So far, the Chinese government has had little response to the new US export controls beyond some diplomatic statements and a legal dispute filed with the World Trade Organization, which is unlikely to yield much. 

Will there be a more dramatic response to come? Most experts say no. China doesn’t seem to have a big enough advantage within the chips sector to significantly hit back at the US with trade restrictions of its own. “The Americans own enough of the core technology that they can [use it] against people who are downstream in the supply chain, like the Chinese. So by definition, that means [China doesn’t] have tools for retaliation,” says John Lee, the director of East West Futures Consulting. 

But the country does control 80% of the world’s refining capacity for rare-earth materials, which are essential in making both military products like parts for fighter jets and everyday consumer device components like batteries and screens. Restricting exports could provide China with some leverage. The Chinese could also choose to sanction a few US companies, whether in the chip industry or not, to send a message.

But so far, China doesn’t seem interested in a scorched-earth path when it comes to semiconductors. “I think the Chinese leaders realized that that approach will be just as costly to China as it would be to the US,” says Miller. The current Chinese chip industry cannot survive without working with the global supply chain—it depends on other companies in other countries for lithography machines, core chip IP, and wafers, so avoiding aggressive retaliation that further poisons the business environment is “probably the smartest strategy for China,” he says. 

Instead of hitting back at the US, China is likely to focus more on propping up its domestic chip industry. It’s been reported that China may announce a trillion-yuan ($143 billion) support package for domestic companies as soon as the first quarter of 2023. Offering generous subsidies is a tried and tested method that has helped boost the Chinese semiconductor industry over the last decade. But there remains the question of how to allocate that funding efficiently and to the right companies, especially after China’s flagship government chip investment fund had its efficiency questioned in 2022 and was shaken by high-level corruption investigations.

The Taiwan question

The US doesn’t call all the shots. To pull off its chip tech blockade, it must coordinate closely with governments that control key chipmaking processes China can’t replace with domestic alternatives: the Netherlands, Japan, South Korea, and Taiwan.

That won’t be as easy as it sounds, because despite their ideological differences with China, these places also have an economic interest in maintaining the trade relationship.

The Netherlands and Japan have reportedly agreed to codify some of the US export control rules in their own countries. But the devil is in the fine print. “There are certainly voices supporting the Americans on this,” says Lee, who’s based in Germany. “But there’re also pretty strong voices arguing that to simply follow the Americans and lockstep on this would be bad for European interests.” Peter Wennink, CEO of Dutch lithography equipment company ASML, has said that his company “sacrificed” for the export controls while American companies benefited.

Fissures between countries may grow bigger as time goes on. “The history of these tech restriction coalitions shows that they are complex to manage over time and they require active management to keep them functional,” Miller says.

Taiwan is in an especially awkward position. Because of their geographical proximity and historical relationship, its economy is heavily entangled with that of China. Many Taiwanese chip companies, like TSMC, sell to Chinese companies and build factories there. In October, the US granted TSMC a one-year exemption from the export restrictions, but the exemption may not be renewed when it expires in 2023. There’s also the possibility that a military conflict between Beijing and Taipei would derail all chip manufacturing activities, but most experts don’t see that happening in the near term. 

“So Taiwanese companies must be hedging against the uncertainties,” Hsu says. This doesn’t mean they will pull out from all their operations in China, but they may consider investing more in overseas facilities, like the two chip fabs TSMC plans to build in Arizona. 

As Taiwan’s chip industry drifts closer towards the US and an alliance solidifies around the American export-control regime, the once globalized semiconductor industry comes one step closer to being separated by ideological lines. “Effectively, we will be entering the world of two chips,” Hsu says, with the US and its allies representing one of those worlds and the other comprising China and the various countries in Southeast Asia, the Middle East, Eurasia, and Africa where China is pushing for its technologies to be adopted. Countries that have traditionally relied on China’s financial aid and trade deals with that country will more likely accept the Chinese standards when building their digital infrastructure, Hsu says.

Though it would unfold very slowly, Hsu says this decoupling is beginning to seem inevitable. Governments will need to start making contingency plans for when it happens, he says: “The plan B should be—what’s our China strategy?”

This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future.

The computer scientist who hunts for costly bugs in crypto code

In the spring of 2022, before some of the most volatile events to hit the crypto world last year, an NFT artist named Micah Johnson set out to hold a new auction of his drawings. Johnson is well known in crypto circles for images featuring his character Aku, a young Black boy who dreams of being an astronaut. Collectors lined up for the new release. On the day of the auction, they spent $34 million on the NFTs.

Then tragedy (or, depending on your point of view, comedy) struck. The “smart contract” code that Johnson’s software team wrote to run the crypto auction contained a critical bug. All $34 million worth of Johnson’s sales was locked on the Ethereum blockchain. Johnson couldn’t withdraw the funds; nor could he refund money to people who’d bid on an NFT but lost their auction. The virtual money was frozen, untouchable—“locked on chain,” as they say. 

Johnson might wish he’d hired Ronghui Gu.

Gu is the cofounder of CertiK, the largest smart-contract auditor in the fizzy and unpredictable world of cryptocurrencies and Web3. An affable and talkative computer science professor at Columbia University, Gu leads a team of more than 250 that pores over crypto code to try to make sure it isn’t filled with bugs. 

CertiK’s work won’t prevent you from losing your money when a cryptocurrency collapses. Nor will it stop a crypto exchange from using your funds inappropriately. But it could help prevent an overlooked software issue from doing irreparable damage. The company’s clients include some of crypto’s biggest players, like the Bored Ape Yacht Club and the Ronin Network, which runs a blockchain used in games. Clients sometimes come to Gu after they’ve lost hundreds of millions—hoping he can make sure it doesn’t happen again.

“This is a real wild world,” Gu says with a laugh.

Crypto code is much more unforgiving than traditional software. Silicon Valley engineers generally try to make their programs as bug-free as possible before they ship, but if a problem or bug is later found, the code can be updated.

That’s not possible with many crypto projects. They run using smart contracts—computer code that governs the transactions. (Say you want to pay an artist 1 ETH for an NFT; a smart contract can be coded to automatically send you the NFT token once the money arrives in the artist’s wallet.) The thing is, once smart-contract code is live on a blockchain, you can’t update it. If you discover a bug, it’s too late: the whole point of blockchains is that you can’t alter stuff that’s been written to them. Worse, code that’s hosted on a blockchain is publicly visible—so black-hat hackers can study it at their leisure and look for mistakes to exploit. 
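
To see how a single logic slip can strand funds, here is a minimal, off-chain Python model of the escrow flow described above. Real smart contracts are typically written in a language such as Solidity and are frozen once deployed; every name in this sketch (NFTSale, PRICE_ETH, the wallet strings) is invented for illustration and is not drawn from Johnson’s contract or any real project.

PRICE_ETH = 1.0   # illustrative fixed price for the NFT

class NFTSale:
    """Off-chain model of a simple sale contract: token for payment."""
    def __init__(self, artist: str, token_id: int):
        self.artist = artist
        self.token_id = token_id
        self.owner = artist              # the artist holds the token until it sells
        self.balances = {artist: 0.0}    # ETH credited to each wallet

    def buy(self, buyer: str, payment_eth: float) -> None:
        # On a blockchain this function's code is frozen at deployment; a logic
        # bug here (say, no withdrawal path for the credited balance) can lock
        # funds permanently.
        if payment_eth < PRICE_ETH:
            raise ValueError("insufficient payment")
        self.balances[self.artist] = self.balances.get(self.artist, 0.0) + payment_eth
        self.owner = buyer               # hand over the token once payment arrives

sale = NFTSale(artist="artist_wallet", token_id=42)
sale.buy(buyer="collector_wallet", payment_eth=1.0)
print(sale.owner, sale.balances)         # collector_wallet {'artist_wallet': 1.0}

On a blockchain, if buy credited payments to a balance that no function could ever withdraw, there would be no way to patch the mistake after deployment, which is roughly the kind of failure described above.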

The sheer number of hacks is dizzying, and they are wildly lucrative. Early last year, the Wormhole network had more than $320 million worth of crypto stolen. Then the Ronin Network lost upwards of $600 million in crypto.

“The most expensive hack in history,” Gu says, shaking his head in near disbelief. “They say Web3 is eating the world—but hackers are eating Web3.”

A bustling field of auditors has emerged in recent years, and Gu’s CertiK is the biggest: the company, which has been valued at $2 billion, estimates it has done about 70% of all smart-contract audits. It also runs a system that monitors smart contracts to detect in real time if any are being hacked.

Not bad for someone who stumbled into the field sideways. Gu didn’t start off in crypto; he did his PhD in provable and verifiable software, exploring ways to write code that behaves in a mathematically predictable fashion. But this subject turned out to be highly applicable to the unforgiving world of smart contracts; he cofounded CertiK with his PhD supervisor in 2018. Gu now straddles the worlds of academia and crypto. He still teaches Columbia courses on compilers and the formal verification of system software, and manages several grad students (one of whom is researching compilers for quantum computing)—while also jetting around to Davos and Morgan Stanley events, clad in his habitual black shirt and black jacket as he attempts to convince crypto and financial bigwigs to take blockchain hacks seriously.

Crypto famously runs in boom-bust cycles; the collapse of the FTX exchange in November was just a recent blow. Gu, however, believes he’ll have work to do for years to come. Mainstream firms like banks and, he says, “a major search engine” are beginning to launch their own blockchain products and hiring CertiK to help keep their ships tight. If established businesses start pushing more code onto blockchains, it’ll attract ever more hackers, including nation-state actors. “The threats we have been facing,” he says, “are more and more tough.”