Google’s big week was a flex for the power of big tech

Last week, this space was all about OpenAI’s 12 days of shipmas. This week, the spotlight is on Google, which has been speeding toward the holiday by shipping or announcing its own flurry of products and updates. The combination of stuff here is pretty monumental, not just for a single company but for what it says about the power of the technology industry—even if it does trigger a personal desire that we could do more to harness that power and put it to more noble uses.

To start, last week Google introduced Veo, a new video generation model, and Imagen 3, a new version of its image generation model.

Then on Monday, Google announced a breakthrough in quantum computing with its Willow chip. The company claims the new machine is capable of a “standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years.” You may recall that MIT Technology Review covered some of the Willow work after researchers posted a paper preprint in August. But this week marked the big media splash. It was a stunning update that had Silicon Valley abuzz. (Seriously, I have never gotten so many quantum computing pitches as in the past few days.)

Google followed this on Wednesday with even more gifts: a Gemini 2 release, a Project Astra update, and news about two forthcoming agents—Mariner, which can browse the web, and Jules, a coding assistant.

First: Gemini 2. It’s impressive, with a lot of performance updates. But I have frankly grown a little inured to language-model performance updates, to the point of apathy. Or at least near-apathy. I want to see them do something.

So for me, the cooler update was second on the list: Project Astra, which comes across like an AI from a futuristic movie set. Google first showed a demo of Astra back in May at its developer conference, and it was the talk of the show. But, since demos offer companies chances to show off products at their most polished, it can be hard to tell what’s real and what’s just staged for the audience. Still, when my colleague Will Douglas Heaven recently got to try it out himself, live and unscripted, it largely lived up to the hype. Although he found it glitchy, he noted that those glitches can be easily corrected. He called the experience “stunning” and said it could be generative AI’s killer app.

On top of all this, Will notes that this week Demis Hassabis, CEO of Google DeepMind (the company’s AI division), was in Sweden to receive his Nobel Prize. And what did you do with your week?

Making all this even more impressive, the advances represented in Willow, Gemini, Astra, and Veo are ones that just a few years ago many, many people would have said were not possible—or at least not in this timeframe. 

A popular knock on the tech industry is that it has a tendency to over-promise and under-deliver. The phone in your pocket gives the lie to this. So too do the rides I took in Waymo’s self-driving cars this week. (Both of which arrived faster than Uber’s estimated wait time. And honestly it’s not been that long since the mere ability to summon an Uber was cool!) And while quantum has a long way to go, the Willow announcement seems like an exceptional advance; if not a tipping point exactly, then at least a real waypoint on a long road. (For what it’s worth, I’m still not totally sold on chatbots. They do offer novel ways of interacting with computers, and have revolutionized information retrieval. But whether they are beneficial for humanity—especially given energy debts, the use of copyrighted material in their training data, their perhaps insurmountable tendency to hallucinate, etc.—is debatable, and certainly is being debated. But I’m pretty floored by this week’s announcements from Google, as well as OpenAI—full stop.)

And for all the necessary and overdue talk about reining in the power of Big Tech, the ability to hit significant new milestones on so many different fronts all at once is something that only a company with the resources of a Google (or Apple or Microsoft or Amazon or Meta or Baidu or whichever other behemoth) can do. 

All this said, I don’t want us to buy more gadgets or spend more time looking at our screens. I don’t want us to become more isolated physically, socializing with others only via our electronic devices. I don’t want us to fill the air with carbon or our soil with e-waste. I do not think these things should be the price we pay to drive progress forward. It’s indisputable that humanity would be better served if more of the tech industry were focused on ending poverty and hunger and disease and war.

Yet every once in a while, in the ever-rising tide of hype and nonsense that pumps out of Silicon Valley, epitomized by the AI gold rush of the past couple of years, there are moments that make me sit back in awe and amazement at what people can achieve, and in which I become hopeful about our ability to actually solve our larger problems—if only because we can solve so many other dumber, but incredibly complicated ones. This week was one of those times for me. 


Now read the rest of The Debrief

The News

• Robotaxi adoption is hitting a tipping point. 

• But also, GM is shutting down its Cruise robotaxi division.

• Here’s how to use OpenAI’s new video editing tool Sora.

• Bluesky has an impersonator problem.

• The AI hype machine is coming under government scrutiny.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. This week, I hit up James O’Donnell, who covers AI and hardware, about his story on how the startup defense contractor Anduril is bringing AI to the battlefield.

Mat: James, you got a pretty up close look at something most people probably haven’t even thought about yet, which is how the future of AI-assisted warfare might look. What did you learn on that trip that you think will surprise people?

James: Two things stand out. One, I think people would be surprised by the gulf between how technology has developed for the last 15 years for consumers versus the military. For consumers, we’ve gotten phones, computers, smart TVs and other technologies that generally do a pretty good job of talking to each other and sharing our data, even though they’re made by dozens of different manufacturers. It’s called the “internet of things.” In the military, technology has developed in exactly the opposite way, and it’s putting them in a crisis. They have stealth aircraft all over the world, but communicating about a drone threat might be done with PowerPoints and a chat service reminiscent of AOL Instant Messenger.

The second is just how much the Pentagon is now looking to AI to change all of this. New initiatives have surged in the current AI boom. They are spending on training new AI models to better detect threats, autonomous fighter jets, and intelligence platforms that use AI to find pertinent information. What I saw at Anduril’s test site in California is also a key piece of that: using AI to connect to and control lots of different pieces of hardware, like drones and cameras and submarines, from a single platform. The amount being invested in AI is much smaller than for aircraft carriers and jets, but it’s growing.

Mat: I was talking with a different startup defense contractor recently, who was talking to me about the difficulty of getting all these increasingly autonomous devices on the battlefield talking to each other in a coordinated way. Like Anduril, he was making the case that this has to be done at the edge, and that there is too much happening for human decision making to process. Do you think that’s true? Why is that?

James: So many in the defense space have pointed to the war in Ukraine as a sign that warfare is changing. Drones are cheaper and more capable than they ever were in the wars in the Middle East. It’s why the Pentagon is spending $1 billion on the Replicator initiative to field thousands of cheap drones by 2025. It’s also looking to field more underwater drones as it plans for scenarios in which China may invade Taiwan.

Once you get these systems, though, the problem is having all the devices communicate with one another securely. You need to play Air Traffic Control at the same time that you’re pulling in satellite imagery and intelligence information, all in environments where communication links are vulnerable to attacks.

Mat: I guess I still have a mental image of a control room somewhere, like you might see in Dr. Strangelove or War Games (or Star Wars for that matter) with a handful of humans directing things. Are those days over?

James: I think a couple things will change. One, a single person in that control room will be responsible for a lot more than they are now. Rather than running just one camera or drone system manually, they’ll command software that does it for them, for lots of different devices. The idea that the defense tech sector is pushing is to take them out of the mundane tasks—rotating a camera around to look for threats—and instead put them in the driver’s seat for decisions that only humans, not machines, can make.

Mat: I know that critics of the industry push back on the idea of AI being empowered to make battlefield decisions, particularly when it comes to life and death, but it seems to me that we are increasingly creeping toward that and it seems perhaps inevitable. What’s your sense?

James: This is painting with broad strokes, but I think the debates about military AI fall along similar lines to what we see for autonomous vehicles. You have proponents saying that driving is not a thing humans are particularly good at, and when they make mistakes, it takes lives. Others might agree conceptually, but debate at what point it’s appropriate to fully adopt fallible self-driving technology in the real world. How much better does it have to be than humans?

In the military, the stakes are higher. There’s no question that AI is increasingly being used to sort through and surface information to decision-makers. It’s finding patterns in data, translating information, and identifying possible threats. Proponents are outspoken that that will make warfare more precise and reduce casualties. What critics are concerned about is how far across that decision-making pipeline AI is going, and how much human oversight there is.

I think where it leaves me is wanting transparency. When AI systems make mistakes, just like when human military commanders make mistakes, I think we deserve to know, and that transparency does not have to compromise national security. It took years for reporter Azmat Khan to piece together the mistakes made during drone strikes in the Middle East, because agencies were not forthcoming. That obfuscation absolutely cannot be the norm as we enter the age of military AI.

Mat: Finally, did you have a chance to hit an In-N-Out burger while you were in California?

James: Normally In-N-Out is a requisite stop for me in California, but ahead of my trip I heard lots of good things about the burgers at The Apple Pan in West LA, so I went there. To be honest, the fries were better, but for the burger I have to hand it to In-N-Out.


The Recommendation

A few weeks ago I suggested Ca7riel and Paco Amoroso’s appearance on NPR Tiny Desk. At the risk of this space becoming a Tiny Desk stan account, I’m back again with another. I was completely floored by Doechii’s Tiny Desk appearance last week. It’s so full of talent and joy and style and power. I came away completely inspired and have basically had her music on repeat on Spotify ever since. If you are already a fan of her recorded music, you will love her live. If she’s new to you, well, you’re welcome. Go check it out. Oh, and don’t worry: I’m not planning to recommend Billie Eilish’s new Tiny Desk concert in next week’s newsletter. Mostly because I’m doing so now.

OpenAI’s “12 days of shipmas” tell us a lot about the AI arms race

This week, OpenAI announced what it calls the 12 days of OpenAI, or 12 days of shipmas. On December 4, CEO Sam Altman took to X to announce that the company would be “doing 12 days of openai. each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers.”

The company will livestream about new products every morning for 12 business days in a row during December. It’s an impressive-sounding (and media-savvy) schedule, to be sure. But it also speaks to how tight the race between the AI bigs has become, and also how much OpenAI is scrambling to build more revenue.

While it remains to be seen whether or not they’ve got AGI in a pear tree up their sleeve, and maybe putting aside whether or not Sam Altman is your true love, the man can ship. OpenAI has been a monster when it comes to actually getting new products out the door and into the hands of users. It’s hard for me to believe that it was just two years ago, almost exactly, that it released ChatGPT. That was a world-changing release, but it was also just one of many. The company has been on an absolute tear: Since 2022, it’s shipped DALL-E 2, DALL-E 3, GPT-4, ChatGPT Plus, a real-time API, GPT-4o, an advanced voice mode, a preview version of a new model called o1, and a web search engine. And that’s just a partial list.

When it kicked off its 12-day shenanigans on Thursday, it was with an official rollout of OpenAI o1 and a new, $200-per-month service called ChatGPT Pro. Friday morning, it followed that up with an announcement about a new model customization technique.

If the point you have taken away from all this is that OpenAI is very, very bad at naming things, you would be right. But! There’s another point to be made, which is that the stuff it is shipping is not coming out in a vacuum anymore, as it was two years ago. When DALL-E 2 shipped, OpenAI seemed a little like the only game in town. That was still mostly true when ChatGPT came out a few months later. But those releases sent Google into full-on freakout mode, issuing a “code red” to catch up. And then it was off to the races.

Now, there is a full-scale sprint happening between OpenAI, Google (which released its Gemini models to the public almost exactly a year ago), Anthropic (which was founded by a bunch of OpenAI formers), Meta, and, to some extent, Microsoft (OpenAI’s partner).

To wit: A little over a month ago, Anthropic unveiled a bananas demo of its chatbot Claude’s ability to use a computer. On Thursday (aka the first day of shipmas), Microsoft announced a version of Copilot that can follow along with you while you browse the web using AI vision. And ahead of what is widely predicted to be OpenAI’s biggest release of shipmas, its new video generation tool Sora, Google jumped ahead with its own generative video product, Veo (although it has not released it widely to the public yet).

Oh. There was also one other announcement from OpenAI, just ahead of shipmas, that seems relevant. On Wednesday, it announced a new partnership with defense contractor Anduril. Some of you may remember that OpenAI is the company that had once pledged not to let its technology be used for weapons development or the military. As James O’Donnell points out, “OpenAI’s policies banning military use of its technology unraveled in less than a year.”

This is notable in its own right, but also in crystallizing just how much OpenAI needs cold hard cash. See also: the new $200-per-month ChatGPT Pro tier. (And while recurring revenue from users will bring in some much-needed cash flow, there is a fortune in defense spending.) In addition, the company is looking into bringing paid advertisements to its services, according to its CFO Sarah Friar in an interview with the FT way back in … (checks watch) … Monday.

As has been oft-discussed, OpenAI is just incinerating piles of money. It’s on track to lose billions and billions of dollars for several more years. It has to start bringing in more revenue, lots more. And to do that it has to stay ahead of its rivals. And to do that, it has to get new, compelling products to market that are better in some way than what its competitors offer. Which means it has to ship. And monetize. And ship. And monetize. Because Google and Anthropic and Meta and a host of others are all going to keep coming out with new products, and new services too.

The arms race is on. And while the 12 days of shipmas may seem jolly, internally I bet it feels a lot more like Santa’s workshop on December 23. Pressure’s on. Time to deliver.

If someone forwarded you this edition of The Debrief, you can subscribe here. I appreciate your feedback on this newsletter. Drop me a line at mat.honan@technologyreview.com with any and all thoughts. And of course, I love tips.


Now read the rest of The Debrief

The News

• Bitcoin breaks $100,000 after Trump announces Paul Atkins as SEC pick. 

• China’s critical mineral ban is an opening salvo, not a kill shot. This is what it means for the US.

• OpenAI announced a deal with defense contractor Anduril. It’s a huge pivot. 

• In an effort to combat sophisticated disinformation campaigns, the US Department of Defense is investing in deepfake detection. 

• President-elect Trump names PayPal Mafia member, All-in Podcast host, and former Yammer CEO David Sacks as White House AI and crypto Czar. 

• An appeals court upheld the US’ TikTok ban. It’s likely going to the Supreme Court.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. This week, I hit up Amanda Silverman, our features and investigations editor, about our big story on the way the war in Ukraine is reshaping the tech sector in eastern Europe.

Mat: Amanda, we published a story this week from Peter Guest that’s about the ways civilian tech is being repurposed for the war in Ukraine. I could be wrong, but ultimately I think it showed how warfare has truly changed thanks to inexpensive, easily built tech products. Is that right?

Amanda: I think that’s pretty spot on. Though maybe it’s more accurate to say less expensive, more easily built tech products. It’s all relative, right? Like, the retrofitted consumer drones that have been so prevalent in Ukraine over the past few years are vastly cheaper than traditional weapons systems, and what we’re seeing now is that lots of other tech that was initially developed for civilian purposes—like, Pete reported on a type of scooter—is being sent to the front. And again, these are much, much cheaper than traditional weaponry. And they can be developed and shipped out really quickly.

The other thing Pete found was that this tech is being quickly reworked to respond to battlefield feedback—like that scooter has been customized to carry NATO standard-sized bullet boxes. I can’t imagine that happening in the old way of doing things.

Mat: It’s move fast and (hope not to) break things, but for war…. There is also this other, much scarier idea in there, which is that the war is changing, maybe has changed, Eastern Europe’s tech sector. What did Pete find is happening there?

Amanda: So a lot of the countries neighboring Ukraine are understandably pretty freaked out by what happened there and how the country had to turn on a dime to respond to the full-scale invasion by Russia. At the same time, Pete found that a lot of people in these countries, particularly in Latvia, and particularly people leading tech startups, have been inspired by how Ukrainians mobilized for the war, and they’re trying to sort of get ahead of the potential enemy and get ready for a conflict within their borders. It’s not all scary, to be clear. It’s arguably somewhat thrilling to see all this innovation happening so quickly and to have some of the more burdensome red tape removed.

Mat: Okay so Russia’s neighbors are freaked out, as you say, understandably. Did anything about this story freak you out?

Amanda: Yeah, it’s impossible to ignore that there is a huge, scary risk here, too: as these companies develop new tech for war, they have an unprecedented opportunity to test it out in Ukraine without going through the traditional development and procurement process—which can be slow and laborious, sure, but also includes a lot of important testing, checks and balances, and more to prevent fraud and lots of other abuses and dangers. Like, Pete nods to how Clearview AI was deploying its tech to identify Russian war dead, which is scary in and of itself and also may violate the Geneva Conventions.

Mat: And then I’m curious, what do you look for when you are assigning a story like this? What caught your attention?

Amanda: I felt like I’d read quite a bit about the total mobilization of Ukrainian society (including a story from Pete in Wired). But I had sort of thought about all this activity as happening in a bit of a vacuum. Or at least in a limited sense, within Ukrainian borders. Of course, the US and our European allies are sending loads of money and loads of weapons but (at least as I understand it) they’re largely weapons we already have in our arsenals. So when Pete pitched us this story about how the war was reshaping the tech sector of Ukraine’s neighbors, particularly civilian tech, I was really intrigued.


The Recommendation

Several weeks ago, we had our e-bike stolen. Some guy with an angle grinder cut the lock. And as it turned out, our insurance didn’t cover the loss because the bike (like almost all e-bikes) had a top speed above 15 mph. As I came to learn, this is not uncommon. But you know what is common? E-bike theft. The police told us there is little chance of recovering our bike—in large part because we did not have a tracker attached to it. It was an all-around frustrating experience. We replaced the bike, and this time I’ve invested in one of these Elevation Labs waterproof mounts to affix an AirTag to the frame, hidden away below the seat. They have a whole line of mounts, a few of which are bike-specific. Much cheaper than a new bike. They make a good stocking stuffer.