What if there was a simple formula ecommerce marketers could follow to improve conversions from digital ad campaigns? It turns out there is.
MECLABS Institute is a research firm devoted to understanding how folks make decisions, such as purchases. Several years ago, the institute distilled its data into a formula for a high-converting landing page.
C = 4M + 3V + 2(I-F) + 2A
Where:
C = conversion.
M = motivation.
V = value.
I = incentives.
F = friction (to be reduced).
A = anxiety (to be addressed).
The idea is that, to convert well, a landing page should mention:
Shoppers’ motivation four times,
Value to shoppers three times,
Incentives to buy twice,
Shoppers’ anxiety twice.
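Treated loosely as arithmetic, the heuristic can be sketched as a weighted checklist. The sketch below is purely illustrative: MECLABS intends the formula as a heuristic rather than literal math, and the 0-to-1 factor ratings are our own assumption.

```python
def conversion_score(motivation, value, incentive, friction, anxiety_addressed):
    """Weighted heuristic score per C = 4M + 3V + 2(I - F) + 2A.

    Each input is an assumed 0-to-1 rating of how well the landing
    page handles that factor; MECLABS treats the formula as a
    heuristic, not literal arithmetic.
    """
    return (4 * motivation
            + 3 * value
            + 2 * (incentive - friction)
            + 2 * anxiety_addressed)

# Strong motivation and value, but friction outweighs the incentive:
print(conversion_score(0.9, 0.7, 0.5, 0.8, 0.6))  # ≈ 6.3 of a possible 11
```

The weights make the formula's point visible: an improvement to motivation (weight 4) moves the score twice as far as the same improvement to incentives (weight 2).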
Testing the Formula
Ecommerce landing pages often differ from, say, those of information providers. An ecommerce ad has a clear goal: sell a product. Thus many such ads point to an existing product or category page.
But those pages are not necessarily a good way to convert an ad-driven visitor.
As a test, consider building a campaign-specific landing page using the MECLABS formula. The page doesn’t need to be an advertorial or anything exotic. Simply apply the formula.
Linking ads (at left) to category pages may not produce strong conversions.
Motivation
I learned about the MECLABS formula from Chris Misterek, a user-experience designer at Showit, a creative agency. Misterek was giving a conference presentation on what motivates shoppers to buy a t-shirt or anything else: the M component of C = 4M + 3V + 2(I-F) + 2A.
A business that knows its customers’ motivations can address them in landing page copy and position the product’s value accordingly.
Businesses that have not yet identified why customers buy should do a bit of research. If nothing else, ask an AI to summarize each of Maslow’s Hierarchy of Needs, Self-Determination Theory, and the Theory of Needs.
Theory of Needs
In his presentation, Misterek zeroed in on the late Professor David McClelland’s Theory of Needs.
McClelland argued that everyone needs a sense of achievement, affiliation, and power.
Achievement. Folks with a high need for achievement want to excel in relation to a set of standards.
Affiliation. These folks want to feel like they belong to a community wherein they can build friendly and, sometimes, close interpersonal relationships.
Power. These individuals want to influence, teach, or encourage others.
For a landing page, motivation can be positive or negative. For example, the need for affiliation can be a positive motive (“be part of the community”) or negative (“don’t miss your chance to connect”).
Imagine you run marketing for an online shop selling science-fiction-themed t-shirts. You might develop a YouTube ad and landing page based on customers’ need for affiliation.
The ad shows a man standing alone at a party. He looks a little nerdy and doesn’t seem to fit in, but he is wearing a sci-fi t-shirt. He spots an equally nerdy woman across the room wearing a similarly themed top. Instant connection and affiliation.
Focus the ad and landing page on shoppers’ motives to buy.
The landing page features an image of the man and woman from the commercial and employs the formula C = 4M + 3V + 2(I-F) + 2A.
The headlines and subheads tell the story in various ways: wearing a science-fiction t-shirt is a conversation starter, allowing sci-fi fans to connect with like-minded individuals and fulfilling their need for social interaction and friendship.
The body copy could state, “When you wear our t-shirts, you join a community. Our designs follow iconic sci-fi themes that resonate with fans worldwide. Share your passion, make new friends, and feel the camaraderie. Follow us on social media to connect with others, participate in discussions, and display your latest purchases.”
Google’s search results are highly personalized. Varying levels of personalization exist for each query based on:
Search history,
Interaction (clicks) history,
The searcher’s Search Console properties,
Location,
Browser settings (e.g., language).
Thus what a merchant sees in search engine results pages could differ from what shoppers see. Here are three ways to depersonalize the results to understand what others may encounter for important keywords.
Chrome Incognito Mode
Incognito mode prevents Chrome from saving browsing history, cookies, site data, and form-completion info.
Searching Google using Incognito will remove the three top personalization types — search history, clicks, and Search Console properties — but not necessarily location and browser settings.
To access Incognito, go to “File” > “New Incognito Window.”
From there, use third-party SERP extractor tools to copy the results to a local file. For example, Serpsonar exports SERPs as an Excel file that includes ranking URLs, titles, and descriptions. SEO Minion, a Chrome extension, offers a one-click copy of the entire SERP in view.
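A lightweight complement to checking manually in Incognito is to pin the personalization-related parameters in the search URL itself. The sketch below uses the long-documented `pws=0` (no personalized results), `gl` (country), and `hl` (language) query parameters; treat them as a convention Google has historically honored rather than an official depersonalization API, and note that automated scraping of the resulting page is against Google's terms.

```python
from urllib.parse import urlencode

def depersonalized_search_url(query, country="us", language="en"):
    """Build a Google search URL with personalization minimized.

    pws=0 asks Google not to apply personalized results; gl and hl pin
    the country and interface language so location and browser settings
    matter less. These are informally documented URL parameters, not an
    official depersonalization API.
    """
    params = {"q": query, "pws": "0", "gl": country, "hl": language}
    return "https://www.google.com/search?" + urlencode(params)

print(depersonalized_search_url("buy laptop"))
# https://www.google.com/search?q=buy+laptop&pws=0&gl=us&hl=en
```

Open the resulting URL in an Incognito window for a reasonably neutral view of the SERP.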
Keyword Insights
Keyword Insights’ freemium SERP Explorer can extract up to 100 results per query. The tool extracts all components of the SERP, not just organic results.
It also provides country- or city-specific SERPs without needing a VPN service or proxies.
To start a new SERP Explorer project:
Type a keyword,
Choose the target location and language,
Select the type of search results: web, image, news, shopping, or video,
Select device: desktop, mobile, or tablet.
Once extracted, the search results are easily copied and retained locally.
SERP Explorer offers five free daily searches with exports limited to the top 10 desktop results. To access all features, upgrade to the professional tier at $145 per month.
To start a new SERP Explorer project, type the keyword (“buy laptop”), location, language, and more.
Thruuu
Thruuu is an AI-powered optimization tool for keyword discovery, SERP analysis, and content creation. The tool extracts organic results and additional SERP components such as local packs, videos, and “People also ask” sections.
Thruuu extracts up to 100 results on mobile and desktop devices and provides additional data for each listing, such as the number of words, number of images, publication date, on-page schema types, page type, and page rank (via the Open PageRank initiative).
Users can download the SERP and related data as an Excel file.
Thruuu offers four free credits to analyze and compare SERPs. Paid plans start at $19 per month.
Thruuu is a tool for keyword discovery, SERP analysis, and content creation. This example is for the keyword “buy a laptop.”
OpenAI has acquired Rockset, whose technology will enable new products, real-time data analysis, and recommendation systems. The deal may signal a new phase for OpenAI, one that could change the face of search marketing in the near future.
What Is Rockset And Why It’s Important
Rockset describes its technology as hybrid search, a multi-faceted approach that integrates vector search, text search, and metadata filtering to retrieve documents that can augment the generation process in RAG systems. RAG (retrieval-augmented generation) is a technique that combines search with generative AI to produce more factually accurate and contextually relevant results. The technology plays a role in Bing’s AI search and Google’s AI Overviews.
Rockset’s research paper about the Rockset Hybrid Search Architecture notes:
“All vector search is becoming hybrid search as it drives the most relevant, real-time application experiences. Hybrid search involves incorporating vector search and text search as well as metadata filtering, all in a single query. Hybrid search is used in search, recommendations and retrieval augmented generation (RAG) applications.
…Rockset is designed and optimized to ingest data in real time, index different data types and run retrieval and ranking algorithms.”
What makes Rockset’s hybrid search important is that it allows the indexing and use of multiple data types (vectors, text, and geospatial data about objects and events), including real-time data. That flexibility lets the technology work with many kinds of data for in-house and consumer-facing applications: contextually relevant product recommendations, customer segmentation and analysis for targeted marketing campaigns, personalization, personalized content aggregation, location-based recommendations (restaurants, services, etc.), and applications that increase user engagement. Rockset lists numerous case studies of how its technology is used.
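Conceptually, a hybrid query combines a vector-similarity score, a keyword-match score, and a metadata filter in a single pass. The toy in-memory sketch below is our own illustration of that idea, not Rockset's implementation; the 50/50 score blend and the tiny two-dimensional vectors are assumptions chosen for readability.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_search(docs, query_vec, query_terms, metadata_filter, alpha=0.5):
    """Score = alpha * vector similarity + (1 - alpha) * keyword overlap,
    computed only over documents that pass the metadata filter."""
    results = []
    for doc in docs:
        if not metadata_filter(doc["meta"]):
            continue  # metadata filtering happens in the same query
        vec_score = cosine(doc["vec"], query_vec)
        terms = set(doc["text"].lower().split())
        text_score = len(terms & query_terms) / len(query_terms)
        results.append((alpha * vec_score + (1 - alpha) * text_score, doc["id"]))
    return sorted(results, reverse=True)

docs = [
    {"id": "a", "vec": [1.0, 0.0], "text": "running shoes sale", "meta": {"in_stock": True}},
    {"id": "b", "vec": [0.9, 0.1], "text": "trail running shoes", "meta": {"in_stock": False}},
    {"id": "c", "vec": [0.0, 1.0], "text": "laptop deals", "meta": {"in_stock": True}},
]
ranked = hybrid_search(docs, [1.0, 0.0], {"running", "shoes"}, lambda m: m["in_stock"])
print(ranked)  # doc "a" ranks first; "b" is filtered out by metadata
```

Real systems replace the plain lists with indexes (approximate-nearest-neighbor indexes for vectors, inverted indexes for text) so the same query shape scales to millions of documents in real time.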
OpenAI’s announcement states:
“AI has the opportunity to transform how people and organizations leverage their own data. That’s why we’ve acquired Rockset, a leading real-time analytics database that provides world-class data indexing and querying capabilities.
Rockset enables users, developers, and enterprises to better leverage their own data and access real-time information as they use AI products and build more intelligent applications.
…Rockset’s infrastructure empowers companies to transform their data into actionable intelligence. We’re excited to bring these benefits to our customers…”
OpenAI’s announcement also explains that they intend to integrate Rockset’s technology into their own retrieval infrastructure.
We know the transformative potential of hybrid search and the possibilities, but OpenAI is so far offering only general ideas of how this will translate into APIs and products that companies and individuals can create and use.
The official announcement of the acquisition from Rockset, penned by one of the cofounders, offered these clues:
“We are thrilled to join the OpenAI team and bring our technology and expertise to building safe and beneficial AGI.
…Advanced retrieval infrastructure like Rockset will make AI apps more powerful and useful. With this acquisition, what we’ve developed over the years will help make AI accessible to all in a safe and beneficial way.
Rockset will become part of OpenAI and power the retrieval infrastructure backing OpenAI’s product suite. We’ll be helping OpenAI solve the hard database problems that AI apps face at massive scale.”
What Exactly Does The Acquisition Mean?
Duane Forrester, formerly of Bing Search and Yext (LinkedIn profile), shared his thoughts:
“Sam Altman has stated openly a couple times that they’re not chasing Google. I get the impression he’s not really keen on being seen as a search engine. More like they want to redefine the meaning of the phrase “search engine”. Reinvent the category and outpace Google that way. And Rockset could be a useful piece in that approach.
Add in Apple is about to make “ChatGPT” a mainstream thing with consumers when they launch the updated Siri this Fall, and we could very easily see query starts migrate away from traditional search engine boxes. Started with TikTok/social, now moving to ai-assistants.”
Another approach, which could impact SEO, is that OpenAI could create an API-based product that companies use to power in-house and consumer-facing applications. In that approach, OpenAI provides the infrastructure, as it currently does with ChatGPT and its foundation models, and lets the world innovate all over the place with OpenAI at the center.
I asked Duane about that scenario and he agreed but also remained open to an even wider range of possibilities:
“Absolutely, a definite possibility. As I’ve been approaching this topic, I’ve had to go up a level. Or conceptually switch my thinking. Search is, at its heart, information retrieval. So if I go down the IR path, how could one reinvent “search” with today’s systems and structures that redefine how information retrieval happens?
This is also – it should be noted- a description for the next-gen advanced site search. They could literally take over site search across a wide range of mid-to-enterprise level companies. It’s easily as advanced as the currently most advanced site-search systems. Likely more advanced if they launch it. So ultimately, this could herald a change to consumer search (IR) and site-search-based systems.
Expanding from that, apps, as they allude to. So I can see their direction here.”
Deedy Das of Menlo Ventures (Poshmark, Roku, Uber) speculated on Twitter about how this acquisition may transform OpenAI:
“This is speculation but I imagine Rockset will power all their enterprise search offerings to compete with Glean and / or a consumer search offering to compete with Perplexity / Google. Permissioning capabilities of Rockset make me think more the former than latter”
Another commenter on Twitter offered a take on how this will affect the future of AI:
“I doubt OpenAI will jump into the enterprise search fray. It’s just far too challenging and something that Microsoft and Google are best positioned to go after.
This is a play to accelerate agentic behaviors and make deep experts within the enterprise. You might argue it’s the same thing as enterprise search, but taking an agent-first approach is much more in line with the OpenAI mission.”
A Consequential Development For OpenAI And Beyond
The acquisition of Rockset may prove to be the foundation of one of the most consequential changes in how businesses use and deploy AI, which, like many other technological developments, could in turn affect the business of digital marketing.
Rockset’s published case studies describe how its customers power recommendation systems, real-time personalization, real-time analytics, and other applications.
Wix announced a new Figma Plugin that enables designers to import Figma designs directly into Wix Studio and dramatically speed up site creation from the design stage to a functioning website.
Figma Design Tool
Figma is a SaaS (software as a service) collaborative design tool that allows designers, teams and clients to prototype designs, in this case website designs.
Figma enables designers, developers and other stakeholders to create, share, and test digital designs for websites (and other digital experiences). Its collaborative features streamline the back and forth of design, feedback and decision-making.
The Wix Figma Plugin allows a designer or design agency to export those designs into Wix Studio, where they can be realized as an actual, functioning website.
Wix Figma Plugin
Gali Erez, Head of Product at Wix Studio Editor, offered the following statement:
“We are thrilled to present the new plugin to the design community… With its innovative features and intuitive interface the plugin empowers users to craft captivating designs, and swiftly streamline the path from design to production. This efficiency enhances their design and development experience and ultimately drives conversions.”
Wix is a SaaS website-builder platform that makes it easy for anyone to create attractive, high-performance websites for virtually any industry, from ecommerce to online services. The Figma plugin is yet another innovation that makes Wix one of the most attractive platforms for creating websites.
Figma last month updated users about the imminent rollout of the Wix plugin, which generated a lot of excitement.
Typical comments:
“Awaiting its public release, very excited.”
“So excited for this plugin to come out. I am glad it’s coming out next month it’ll change the game in web design.”
Let’s talk about PPC bidding strategies. If you’ve ever felt like you’re throwing darts in the dark when it comes to picking the right one, you’re not alone.
When I first started in Google Ads, the only bidding strategy available was “Max CPC” bidding, meaning everything was manual.
These strategies aren’t exactly a “one size fits all” deal for your campaigns.
Not only are there more choices than ever to reach your goals, but the inputs you set at the campaign level are just as crucial for success.
The truth is that choosing the right bid strategy can be the difference between crushing your PPC goals and watching your budget go up in flames.
Let’s dive into the nitty-gritty of AI-powered bid strategies, or Smart Bidding strategies, and figure out how to maximize performance for each of your campaigns.
How Many PPC Bid Strategies Does Google Ads Have?
Google Ads offers multiple types of bidding strategies aimed at meeting the goals of all available campaign types.
These strategies use Google AI to optimize bids in each and every auction, a capability known as “auction-time bidding.”
Google’s AI takes many factors into consideration at auction time beyond your bidding strategy, including device, location, time of day, operating system, and many more.
Google categorizes its Smart Bidding strategies into three main goals:
Conversions.
Clicks.
Viewability.
It’s important to match your Google Ads bid strategies with the campaign’s specific advertising objectives.
If you’re not sure where to start with goals, consider these points when making a bid strategy decision:
Are you looking for users to take direct action on your website?
Are you looking to increase video engagement and interaction?
Are you focused on product or brand consideration when users are actively shopping around?
Conversion-Based Bid Strategies
Currently, Google Ads offers these Smart Bidding strategies aimed at increasing conversions:
Target Cost per Action (CPA).
Target Return on Ad Spend (ROAS).
Maximize Conversions.
Maximize Conversion Value.
Enhanced Cost per Click (eCPC).
Click-Based Bid Strategies
If your main goal is gaining website traffic, the only automated bid strategy currently available is Maximize Clicks.
Manual CPC bidding is still an option, but we’ll get to that later on in the article.
Visibility-Based Bid Strategies
Not all campaigns aim to capture the final conversion, and that’s ok!
You need to have some element of brand awareness coming in, otherwise the group of people who know about your product will continue to shrink.
If your campaign goals are focused on awareness, consider these automated PPC bidding strategies:
Target Impression Share.
CPM.
tCPM.
vCPM.
Next, we’ll examine the main AI-powered PPC bidding strategies in more detail to better understand each one, as well as when it makes sense to choose that particular bid strategy.
Target Cost Per Action (CPA) Bidding
Target CPA lets you set the amount you’re willing to pay for a conversion. Google Ads uses machine learning to get as many conversions as possible at or below your set CPA.
Google then takes your Target CPA to set bids based on the likelihood of conversion from that particular user.
While some conversions may cost more than your Target CPA, others may cost less than your target, but overall, the Google Ads system tries to keep your cost per conversion at the level you set.
There are multiple use cases for choosing Target CPA bidding:
Historical conversion data is available. This bid strategy requires historical conversion data, so if you have ample campaign or account conversion data, this could be a good strategy for you.
You need better budget control. It’s also good if you need to retain control over your CPA in order to manage the overall ROI of your PPC program.
Conversion tracking is accurately set up. As long as there are no underlying issues with your conversion tracking, this bid strategy could be reliable for your campaigns.
For example, say you run an online boutique clothing website and know that acquiring a new customer at $50 will still be profitable. For your campaign, you choose the Target CPA bid strategy and set the limit to $50.
As you’re running your campaigns, the data shows you’ve consistently been acquiring new customers at $40. Because of this, the Google Ads system knows it can optimize bids further to get you more customers while still staying within that $50 limit.
Now, there are some limitations to Target CPA bidding to be aware of:
Limited budgets could reduce visibility. If you’ve set a competitive Target CPA, Google may limit your ad exposure or participation in the auction and reserve your budget for more expensive or competitive auctions. Essentially, you may see impressions and clicks decline as the system “conserves” budget expenditure for the most likely-to-purchase candidates.
Misalignment of daily budget and Target CPA can reduce results. Say you have a daily budget of $50 for your campaign, but your Target CPA is set at $25. Your impressions may be vastly reduced because, in this scenario, you’d need to have a stellar conversion rate for the number of clicks you get in order to stay within that $25 CPA.
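The budget-mismatch scenario above is easy to make concrete. In the sketch below, the $5 average CPC is an assumed figure for illustration; only the $50 daily budget and $25 Target CPA come from the example.

```python
def required_conversion_rate(daily_budget, target_cpa, avg_cpc):
    """Conversion rate needed for a daily budget and Target CPA to be
    compatible at full spend. All inputs are illustrative."""
    clicks_per_day = daily_budget / avg_cpc
    conversions_needed = daily_budget / target_cpa
    # Simplifies to avg_cpc / target_cpa: the budget cancels out.
    return conversions_needed / clicks_per_day

# $50/day budget, $25 Target CPA, assumed $5 average CPC:
print(required_conversion_rate(50, 25, 5))  # 0.2, i.e., a 20% conversion rate
```

A 20% conversion rate is far above typical search benchmarks, which is why this configuration tends to produce sharply reduced impressions rather than cheap conversions.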
Target Return On Ad Spend (ROAS) Bidding
Target ROAS aims to achieve a specific return on ad spend. You set the desired ROAS, and Google Ads optimizes bids to maximize conversion value while hitting your target.
Similar to Target CPA, Google then takes your ROAS inputs to set bids based on the likelihood of a conversion from that particular user.
Some good use cases for using Target ROAS bidding for campaigns include:
Your goals are revenue-driven. Target ROAS is great for ecommerce businesses where goals are revenue-based.
You have high-value transactions. This PPC bidding strategy can be especially effective for high-revenue transactions or a high volume of conversions.
Proper conversion tracking is set up. Similar to Target CPA bidding, this strategy requires accurate conversion tracking. As long as tracking is accurate and validated, this can be a good choice for your campaigns.
The Target ROAS bid strategy is a great choice when you need to balance the cost of your PPC campaigns versus the revenue coming in.
Ultimately, it helps generate more revenue for every dollar spent.
For example, you have an online store that sells running shoes. Your average order value is $150, and you aim to have a 300% ROAS.
That means for every $1 you spend, you get back $3 in revenue. By setting a Target ROAS, Google optimizes campaign bids to focus on the specific conversions that will likely meet or exceed that 300% ROAS goal.
As your campaigns gain more historical sales data, you’ll notice that more of your dollars are going to those higher revenue-generating sales because of the goal setting.
With Target ROAS settings, remember that if you have an overall goal of 300% ROAS, that doesn’t mean every campaign you set should have that 300% goal.
When it comes to search campaigns, brand terms and non-brand terms are not created equal. Brand terms will likely have the highest ROAS because someone is actively searching for your brand, signaling a higher likelihood of purchase.
Non-brand terms, on the other hand, will be more competitive and costly, and likely won’t have the same ROAS as brand terms. So, be sure to set your ROAS goals at the campaign level accordingly.
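The relationship between a ROAS target and what you can afford to pay per order is simple division. The sketch below reuses the running-shoe figures from the example above.

```python
def roas(revenue, spend):
    """Return on ad spend as a ratio, e.g., 3.0 means 300%."""
    return revenue / spend

def max_cost_per_order(avg_order_value, target_roas):
    """Highest ad cost per order that still meets the ROAS target."""
    return avg_order_value / target_roas

print(roas(450, 150))                # 3.0: $3 back for every $1 spent
print(max_cost_per_order(150, 3.0))  # 50.0: up to $50 of ad spend per $150 order
```

This arithmetic is also why brand and non-brand campaigns deserve different targets: cheap brand clicks can sustain a much higher target ROAS than costly non-brand terms.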
Maximize Conversions Bidding
Maximize Conversions automatically sets bids to help you get the most conversions within your budget.
This strategy aims at spending your entire campaign budget without having any ROAS or CPA limitations.
Maximize Conversions can be an ideal bid strategy if:
You have more budget flexibility. As mentioned above, this strategy is not constrained by CPA or ROAS targets. If you’re looking to drive as many conversions as possible and have the budget to do so, this strategy is right for you.
You’re looking for quick wins. If you just launched a new product and conversion expectations are high, this is an ideal strategy.
A broader audience is targeted. This strategy can be effective with a broader audience because there’s more of a likelihood for your ads to show as the system learns what a valuable customer looks like.
For example, your company just launched a new fitness app and needs to acquire users quickly.
By having a flexible budget, Maximize Conversions is chosen to drive as many downloads and signups as possible. Google will automatically adjust those bids to find the users most likely to convert.
This bid strategy is not for the faint of heart, especially for advertisers who have limited budgets or need to stay within certain performance constraints.
Maximize Conversion Value Bidding
Similar to Maximize Conversions, the Maximize Conversion Value strategy sets bids to help you get the most conversion value within your budget.
This strategy aims to optimize for conversion value while spending your entire campaign budget without having any ROAS or CPA limitations.
Maximize Conversion Value can be an ideal bid strategy if:
Conversion value is prioritized over volume. This bid strategy is suitable when the goal is to prioritize high-value conversions instead of the volume of conversions.
Campaigns are revenue-focused. Maximize Conversion Value is great when maximizing revenue is important.
Your products have multiple price points. This is an effective bid strategy when you have different products that vary in price. The system will learn to focus on the high-value transactions from users.
For example, you run an online wedding invitations company with higher price points. Your site also sells accessories that cost much less than the invitations.
Using the Maximize Conversion Value bidding strategy helps focus on those high-value transactions, like wedding invitations, to boost your revenue while spending your campaign budget.
As with each bidding strategy, there are some limitations to using the Maximize Conversions (and Value) strategies:
Performance is dependent on campaign budget. If the campaign budget is set too low, it will be difficult for Google Ads to effectively learn and optimize towards high-value conversions.
Less control over specific types of conversions. If you’re measuring multiple conversion types that have values associated, this strategy doesn’t allow you to target the specific conversion types. Its aim is to look at the overall conversion value.
This could lead to inefficiencies in performance metrics. While you may see an increase in revenue, you could also see a higher cost per acquisition, especially in more competitive markets.
At the end of the day, it’s up to you to decide if you have enough budget flexibility to utilize Maximize Conversions (or Value) or need to stay within certain ROAS or CPA constraints.
Maximize Clicks Bidding
The Maximize Clicks bid strategy aims to get as many clicks as possible within your budget.
What’s nice about this strategy is that you can add a “ceiling” bid limit that Google won’t exceed in the auction process.
Maximize Clicks is ideal for your campaigns if:
You’re looking to increase website traffic. If you’re less focused on conversion and looking to get as much traffic as possible, this strategy is for you.
You’re running Top-of-Funnel (TOF) or Middle-of-Funnel (MOF) campaigns. Similar to the above, if your campaign goal is more about awareness generation and buyer consideration, Maximize Clicks is a great place to start.
You’re setting up new campaigns with no history. Because many of the conversion-based bid strategies require historical data, setting campaigns to Maximize Clicks with a suitable maximum CPC limit can really help your campaigns take off quickly.
For example, you started a recipe blog website and just published a new guide on healthy swaps in your kitchen. Your primary goal is to drive as much traffic to that page as possible within your given budget.
Using the Maximize Clicks bid strategy will then aim to get you as many clicks to your site within that budget for the keywords you’re bidding on. Just remember to set a maximum CPC if you’re in a competitive industry!
Target Impression Share Bidding
This next PPC bidding strategy focuses mainly on the visibility of your campaigns, whereas the others focus on conversion or click-based outcomes.
Target Impression Share automatically sets bids to help ensure your ads achieve a specific impression share on the search results page.
You can choose your goal to show your ads:
At the absolute top of the page.
On the top of the page.
Anywhere on the page of the search results.
Using the Target Impression Share strategy can help boost your campaigns if:
Brand awareness is top of mind. If the campaign’s main goal is maintaining a solid presence on Google or increasing visibility for your brand, this strategy is for you.
You’re in a highly competitive market. In markets where competition is high, visibility for your brand is crucial, and this strategy helps maintain it.
You’re running top-of-funnel keywords. Similar to brand awareness, you may be targeting keywords that aren’t conversion-focused and want your brand to be top of mind when users first start their purchase journey.
For example, you just launched a new fashion brand and want to ensure your ads are visible in a highly competitive space.
Your goal is to appear at the top of the Google search results page for keywords like “summer fashion trends” or “stylish summer outfits for women.” By choosing Target Impression Share, you can choose how often you’re willing to appear at the top of the page for the keywords you’re bidding on.
Keep in mind that by using this bid strategy, you may see higher-than-average CPCs. That’s because you’re paying extra for that coveted top space on the search results page.
Another example is setting your brand campaign on Target Impression Share to ensure your core brand terms are always covered.
Results have been mixed in my experience, as sometimes I’ll just see inflated CPCs on terms where I would’ve seen lower CPCs utilizing Maximize Conversions or Maximize Clicks.
What About Manual Bidding?
Manual CPC bidding is still around – for now.
Google has not indicated that it is removing this option, but we can never guarantee that it will always be there.
As the name says, Manual CPC bidding means you set the maximum CPCs you’re willing to pay. They can be set at the campaign, ad group, or keyword level.
The reason many have transformed their PPC bidding strategies to more AI-powered strategies from Google is that human real-time bidding simply can’t keep up with machine learning.
There are still use cases for Manual CPC, and some brands continue to use it to this day, especially brands without conversion data or those running small accounts.
So, there you have it – a breakdown of Google Ads’ AI-powered bid strategies and when to use them.
Remember, the key to PPC success is not just picking any strategy but choosing the right one for your specific goals and campaign needs.
Google’s machine learning outputs are usually the direct result of the inputs from the advertisers, so choose accordingly. And remember, they can be changed over time! Just make sure that your changes align with the overall business goals.
By understanding these strategies, you can make smarter decisions and get the most out of your PPC budget. Happy optimizing!
Amid the hoopla surrounding next month’s Prime Day, it’s worth remembering the marginal impact of that event on Amazon’s overall financial performance. Measured by bottom-line profit, Amazon in 2024 is mostly a cloud computing company.
Yet millions of merchants and consumers rely on Amazon’s marketplace. What follows is our analysis of the company’s overall financial performance and its plans for shoppers, sellers, logistics, and more.
Assessing Amazon’s financials requires a bit of scrutiny. The company, famously opaque in what it discloses (and doesn’t disclose), operates three components.
The first is physical and digital goods that it carries as inventory and sells directly to consumers, either online or through outlets such as Whole Foods Market. Amazon calls this component “Product sales.”
Next is what the company calls “Service sales.” It consists of commissions from its massive marketplace and related fulfillment, shipping, and advertising revenue. Grouped into Service sales are Prime membership fees and, notably, fees to Amazon Web Services, its monster cloud-computing division.
For purposes of this article, however, AWS is a separate component given its size and profitability.
All told, the three components generated $143.3 billion in Q1 2024 revenue, a 13% increase from the first quarter a year earlier.
Operating income for Q1 2024 reached $15.3 billion, much higher than the $4.8 billion a year earlier.
The big-picture takeaways are these.
Amazon is highly dependent on AWS. The cloud division drove all operating income (net sales less operating expenses) in the first quarter last year and roughly 62% this year.
“Product sales,” while modestly growing, are likely only marginally profitable, at best, given the presumed cost of goods attached to that category. Amazon does not report operating income for Product sales alone.
“Service sales” (excluding AWS), with “Third-party seller services” (marketplace commissions and related), “Advertising services,” and “Subscription services” (Prime memberships, mostly), could easily be more profitable than “Products.” But, again, Amazon doesn’t separately report operating income for Services. Here’s the revenue breakout, however, for those items.
According to Marketplace Pulse, Amazon pockets more than 50% of marketplace seller revenue, up from 40% five years ago. A typical Amazon seller, per Marketplace Pulse, pays a 15% transaction fee, 20-35% in Fulfillment by Amazon fees, and up to 15% for advertising and promotions on Amazon. The total fees vary depending on the category, product price, size, weight, and the seller’s business model.
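To see how those fees stack up, here is a back-of-the-envelope sketch in Python using the ranges cited above; the specific rates and product price are illustrative assumptions, since actual fees vary by category, size, weight, and business model.

```python
# Back-of-the-envelope Amazon seller fee math, using rates in the
# ranges cited above (15% transaction, 20-35% FBA, up to 15% ads).
# The chosen values and price are hypothetical.
def total_fee_share(transaction=0.15, fba=0.30, ads=0.10):
    """Fraction of a sale that goes to Amazon under the assumed rates."""
    return transaction + fba + ads

price = 40.00                      # hypothetical product price
fees = price * total_fee_share()   # 55% of revenue at these rates
print(f"Amazon's cut: ${fees:.2f} of ${price:.2f}")  # Amazon's cut: $22.00 of $40.00
```

At these assumed rates the seller keeps just $18 of a $40 sale, which is consistent with Marketplace Pulse’s estimate that Amazon pockets more than half of marketplace seller revenue.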
Delivery and AI
Amazon delivers to Prime members faster than ever, with more than 2 billion global packages arriving the same or next day in the first quarter. In March, across the top 60 largest U.S. metro areas, nearly 60% of Prime member orders arrived the same or the next day, and in London, Tokyo, and Toronto, three out of four items were delivered the same or the next day.
Whole Foods and Amazon Fresh now offer a grocery subscription service with unlimited delivery on orders over $35. The program is available to Prime members in more than 3,500 U.S. cities, as well as customers using an Electronic Benefits Transfer card, i.e., those using government benefits.
Amazon continued rolling out Rufus, its generative artificial intelligence shopping assistant, to millions of U.S. customers. The bot, still in beta, can answer shopping-related questions, compare and recommend products, and more. Amazon said it improved Rufus’s accuracy and response speed and added features, including “My Orders,” which answers questions such as “when did I last order coffee?” and “what dog treats did I last order?”
The company continues adding generative AI features for marketplace sellers. One new tool allows sellers to sync product listings from their own websites by providing a URL. The program parses the information from the websites to create “high-quality, engaging listings” on Amazon.
Profitability
Amazon reported net income (operating income less taxes and extraordinary items) of nearly $37.7 billion for the 12 months ending March 31, up 778% from $4.3 billion a year earlier. Yet the company sees further improvements ahead.
In the April earnings call, CEO Andrew Jassy stated he “doesn’t believe that we’re at the end of what we can do in terms of improving our cost structure on the Stores side [i.e., “Products sales”]. Yes, I think there are really unbelievable growth opportunities in front of us, and on the Stores profitability.”
He added, “We’re looking for ways to, again, turn over every rock, look at every process and everything that we do on the logistics side, and see how we can get our cost structure down and get speed and selection up. So, it’s working on a lot of fronts there, but cost is certainly front and center as we meet and improve customer experience.”
Profits grew immensely over the year, yet the company’s operating margins, while improved, remain modest, which may be a driver of the cost concerns. Amazon reported a global operating margin on net sales of 8% for the 12 months through March 31, compared to 2.5% a year earlier. That figure for North America totaled 5.2% through March, versus -0.1% a year earlier. The figure for international net sales improved to -0.4% from -6.6%.
Global Logistics
In September 2023, the company introduced Supply Chain by Amazon, offering third-party logistics worldwide.
“It really kind of, in some ways, mirrors some of the other businesses we’ve gotten involved in, AWS being an example of it,” Jassy said on the call.
The service helps sellers get items across borders and through customs. It also ships items from customs to various facilities, including allowing sellers to store items in warehouses that they can automatically replenish into Amazon’s fulfillment centers or move elsewhere.
“It turns out to be pretty hard work to actually import items from overseas, get them through customs and the border, and then ship them from that point to various facilities,” Jassy said. “We built that capability for ourselves first, and then we opened up those services as individual services to our sellers.”
Supply Chain by Amazon is “growing very significantly. It’s already what I would consider a reasonable-sized business,” Jassy said, adding that it’s still early for the low-capital program.
First, a confession. I only got into playing video games a little over a year ago (I know, I know). A Christmas gift of an Xbox Series S “for the kids” dragged me—pretty easily, it turns out—into the world of late-night gaming sessions. I was immediately attracted to open-world games, in which you’re free to explore a vast simulated world and choose what challenges to accept. Red Dead Redemption 2 (RDR2), an open-world game set in the Wild West, blew my mind. I rode my horse through sleepy towns, drank in the saloon, visited a vaudeville theater, and fought off bounty hunters. One day I simply set up camp on a remote hilltop to make coffee and gaze down at the misty valley below me.
To make them feel alive, open-world games are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—populate the bars, city streets, or space ports of games. They make these virtual worlds feel lived in and full. Often—but not always—you can talk to them.
In open-world games like Red Dead Redemption 2, players can choose diverse interactions within the same simulated experience.
After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. It’s still fun—I had a whale of a time, honestly, looting stagecoaches, fighting in bar brawls, and stalking deer through rainy woods—but the illusion starts to weaken when you poke at it. It’s only natural. Video games are carefully crafted objects, part of a multibillion-dollar industry, that are designed to be consumed. You play them, you loot a few stagecoaches, you finish, you move on.
It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end.
Startups employing generative-AI models, like ChatGPT, are using them to create characters that don’t rely on scripts but, instead, converse with you freely. Others are experimenting with NPCs who appear to have entire interior worlds, and who can continue to play even when you, the player, are not around to watch. Eventually, generative AI could create game experiences that are infinitely detailed, twisting and changing every time you experience them.
The field is still very new, but it’s extremely hot. In 2022 the venture firm Andreessen Horowitz launched Games Fund, a $600 million fund dedicated to gaming startups. A huge number of these are planning to use AI in gaming. And the firm, also known as A16Z, has now invested in two studios that are aiming to create their own versions of AI NPCs. A second $600 million round was announced in April 2024.
Early experimental demos of these experiences are already popping up, and it may not be long before they appear in full games like RDR2. But some in the industry believe this development will not just make future open-world games incredibly immersive; it could change what kinds of game worlds or experiences are even possible. Ultimately, it could change what it means to play.
“What comes after the video game? You know what I mean?” says Frank Lantz, a game designer and director of the NYU Game Center. “Maybe we’re on the threshold of a new kind of game.”
These guys just won’t shut up
The way video games are made hasn’t changed much over the years. Graphics are incredibly realistic. Games are bigger. But the way in which you interact with characters, and the game world around you, uses many of the same decades-old conventions.
“In mainstream games, we’re still looking at variations of the formula we’ve had since the 1980s,” says Julian Togelius, a computer science professor at New York University who has a startup called Modl.ai that does in-game testing. Part of that tried-and-tested formula is a technique called a dialogue tree, in which all of an NPC’s possible responses are mapped out. Which one you get depends on which branch of the dialogue tree you have chosen. For example, say something rude about a passing NPC in RDR2 and the character will probably lash out—you have to quickly apologize to avoid a shootout (unless that’s what you want).
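A dialogue tree is just a branching data structure: each node holds an NPC line and maps each possible player reply to the next node. A minimal sketch in Python (names and lines are invented for illustration, not taken from any real game engine):

```python
# Minimal dialogue-tree sketch: every NPC response is authored in
# advance, and the player's choice selects which branch to follow.
class DialogueNode:
    def __init__(self, npc_line, choices=None):
        self.npc_line = npc_line          # what the NPC says at this node
        self.choices = choices or {}      # player reply -> next node

# A tiny tree: insult a stranger, then either apologize or escalate.
shootout = DialogueNode("Draw, then!")
walk_away = DialogueNode("Hmph. Watch yourself, partner.")
root = DialogueNode(
    "You got a problem, stranger?",
    {
        "Apologize": walk_away,
        "Insult again": shootout,
    },
)

def play(node, pick):
    """Advance the conversation by one player choice."""
    return node.choices[pick]

print(play(root, "Apologize").npc_line)  # -> Hmph. Watch yourself, partner.
```

Every path through the tree must be written by hand, which is why AAA studios need the "insane amounts of writing" Togelius describes below: the NPC can never say anything outside its authored branches.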
In the most expensive, high-profile games, the so-called AAA games like Elden Ring or Starfield, a deeper sense of immersion is created by using brute force to build out deep and vast dialogue trees. The biggest studios employ teams of hundreds of game developers who work for many years on a single game in which every line of dialogue is plotted and planned, and software is written so the in-game engine knows when to deploy that particular line. RDR2 reportedly contains an estimated 500,000 lines of dialogue, voiced by around 700 actors.
“You get around the fact that you can [only] do so much in the world by, like, insane amounts of writing, an insane amount of designing,” says Togelius.
Generative AI is already helping take some of that drudgery out of making new games. Jonathan Lai, a general partner at A16Z and one of Games Fund’s managers, says that most studios are using image-generating tools like Midjourney to enhance or streamline their work. And in a 2023 survey by A16Z, 87% of game studios said they were already using AI in their workflow in some way—and 99% planned to do so in the future. Many use AI agents to replace the human testers who look for bugs, such as places where a game might crash. In recent months, the CEO of the gaming giant EA said generative AI could be used in more than 50% of its game development processes.
Ubisoft, one of the biggest game developers, famous for AAA open-world games such as Assassin’s Creed, has been using a large-language-model-based AI tool called Ghostwriter to do some of the grunt work for its developers in writing basic dialogue for its NPCs. Ghostwriter generates loads of options for background crowd chatter, which the human writer can pick from or tweak. The idea is to free the humans up so they can spend that time on more plot-focused writing.
GEORGE WYLESOL
Ultimately, though, everything is scripted. Once you spend a certain number of hours on a game, you will have seen everything there is to see, and completed every interaction. Time to buy a new one.
But for startups like Inworld AI, this situation is an opportunity. Inworld, based in California, is building tools to make in-game NPCs that respond to a player with dynamic, unscripted dialogue and actions—so they never repeat themselves. The company, now valued at $500 million, is the best-funded AI gaming startup around thanks to backing from former Google CEO Eric Schmidt and other high-profile investors.
Role-playing games give us a unique way to experience different realities, explains Kylan Gibbs, Inworld’s CEO and founder. But something has always been missing. “Basically, the characters within there are dead,” he says.
“When you think about media at large, be it movies or TV or books, characters are really what drive our ability to empathize with the world,” Gibbs says. “So the fact that games, which are arguably the most advanced version of storytelling that we have, are lacking these live characters—it felt to us like a pretty major issue.”
Gamers themselves were pretty quick to realize that LLMs could help fill this gap. Last year, some came up with ChatGPT mods (a way to alter an existing game) for the popular role-playing game Skyrim. The mods let players interact with the game’s vast cast of characters using LLM-powered free chat. One mod even included OpenAI’s speech recognition software Whisper AI so that players could speak to the characters with their own voices, saying whatever they wanted, and have full conversations that were no longer restricted by dialogue trees.
The results gave gamers a glimpse of what might be possible but were ultimately a little disappointing. Though the conversations were open-ended, the character interactions were stilted, with delays while ChatGPT processed each request.
Inworld wants to make this type of interaction more polished. It’s offering a product for AAA game studios in which developers can create the brains of an AI NPC that can then be imported into their game. Developers use the company’s “Inworld Studio” to generate their NPC. For example, they can fill out a core description that sketches the character’s personality, including likes and dislikes, motivations, or useful backstory. Sliders let you set levels of traits such as introversion or extroversion, insecurity or confidence. And you can also use free text to make the character drunk, aggressive, prone to exaggeration—pretty much anything.
Developers can also add descriptions of how their character speaks, including examples of commonly used phrases that Inworld’s various AI models, including LLMs, then spin into dialogue in keeping with the character.
Game designers can also plug other information into the system: what the character knows and doesn’t know about the world (no Taylor Swift references in a medieval battle game, ideally) and any relevant safety guardrails (does your character curse or not?). Narrative controls will let the developers make sure the NPC is sticking to the story and isn’t wandering wildly off-base in its conversation. The idea is that the characters can then be imported into video-game graphics engines like Unity or Unreal Engine to add a body and features. Inworld is collaborating with the text-to-voice startup ElevenLabs to add natural-sounding voices.
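The workflow described above (a core description, trait sliders, allowed knowledge, guardrails) can be pictured as a structured character definition that gets flattened into an LLM prompt. The sketch below is purely illustrative; the field names are invented and are not Inworld's actual API.

```python
# Hypothetical character definition in the spirit of the Inworld Studio
# workflow described above. All field names are invented for illustration.
npc = {
    "name": "Mara",
    "core_description": "A weary saloon owner who distrusts strangers "
                        "but softens when treated kindly.",
    "traits": {                      # slider-style values in [0, 1]
        "extroversion": 0.3,
        "confidence": 0.7,
    },
    "knowledge": [                   # facts the character is allowed to know
        "The town's gold mine closed last spring.",
    ],
    "guardrails": {
        "profanity": False,          # keep dialogue family-friendly
        "modern_references": False,  # no Taylor Swift in a Western
    },
    "sample_phrases": [
        "We don't get many new faces around here.",
    ],
}

def system_prompt(c):
    """Flatten the definition into an LLM system prompt (one common approach)."""
    return (f"You are {c['name']}. {c['core_description']} "
            f"Known facts: {' '.join(c['knowledge'])}")

print(system_prompt(npc))
```

In practice the guardrails and narrative controls would also be enforced at runtime, not just in the prompt, so the character can't be talked out of its role.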
Inworld’s tech hasn’t appeared in any AAA games yet, but at the Game Developers Conference (GDC) in San Francisco in March 2024, the firm unveiled an early demo with Nvidia that showcased some of what will be possible. In Covert Protocol, each player operates as a private detective who must solve a case using input from the various in-game NPCs. Also at the GDC, Inworld unveiled a demo called NEO NPC that it had worked on with Ubisoft. In NEO NPC, a player could freely interact with NPCs using voice-to-text software and use conversation to develop a deeper relationship with them.
LLMs give us the chance to make games more dynamic, says Jeff Orkin, founder of Bitpart, a new startup that also aims to create entire casts of LLM-powered NPCs that can be imported into games. “Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended,” he says.
Bitpart’s approach is in part inspired by Orkin’s PhD research at MIT’s Media Lab. There, he trained AIs to role-play social situations using game-play logs of humans doing the same things with each other in multiplayer games.
Bitpart’s casts of characters are trained using a large language model and then fine-tuned in a way that means the in-game interactions are not entirely open-ended and infinite. Instead, the company uses an LLM and other tools to generate a script covering a range of possible interactions, and then a human game designer will select some. Orkin describes the process as authoring the Lego bricks of the interaction. An in-game algorithm searches out specific bricks to string them together at the appropriate time.
Bitpart’s approach could create some delightful in-game moments. In a restaurant, for example, you might ask a waiter for something, but the bartender might overhear and join in. Bitpart’s AI currently works with Roblox. Orkin says the company is now running trials with AAA game studios, although he won’t yet say which ones.
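Orkin's "Lego bricks" idea can be sketched as a designer-approved library of small interaction snippets plus a runtime matcher that picks the brick fitting the current situation. The code below is an invented toy version of that pattern, not Bitpart's actual system.

```python
# Toy sketch of the "Lego bricks" pattern: designers author and approve
# small interaction snippets; at runtime a matcher selects the brick
# whose tags best fit the current situation. Entirely illustrative.
bricks = [
    {"tags": {"restaurant", "order"},     "line": "Waiter: What can I get you?"},
    {"tags": {"restaurant", "eavesdrop"}, "line": "Bartender: Couldn't help overhearing..."},
    {"tags": {"street", "greeting"},      "line": "Passerby: Fine morning, isn't it?"},
]

def pick_brick(situation_tags):
    """Return the brick whose tags overlap the current situation most."""
    return max(bricks, key=lambda b: len(b["tags"] & situation_tags))

# The bartender overhears an order placed in the restaurant.
print(pick_brick({"restaurant", "eavesdrop"})["line"])
```

Because every brick is human-authored, the designer keeps control of tone and plot while the selection logic supplies the variety, which is the trade-off Orkin describes between open-ended generation and scripted quality.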
But generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.
Making the impossible possible
When I asked Frank Lantz about how AI could change gaming, he talked for 26 minutes straight. His initial reaction to generative AI had been visceral: “I was like, oh my God, this is my destiny and is what I was put on the planet for.”
Lantz has been in and around the cutting edge of the game industry and AI for decades but received a cult level of acclaim a few years ago when he created the Universal Paperclips game. The simple in-browser game gives the player the job of producing as many paper clips as possible. It’s a riff on the famous thought experiment by the philosopher Nick Bostrom, which imagines an AI that is given the same task and optimizes against humanity’s interest by turning all the matter in the known universe into paper clips.
Lantz is bursting with ideas for ways to use generative AI. One is to experience a new work of art as it is being created, with the player participating in its creation. “You’re inside of something like Lord of the Rings as it’s being written. You’re inside a piece of literature that is unfolding around you in real time,” he says. He also imagines strategy games where the players and the AI work together to reinvent what kind of game it is and what the rules are, so it is never the same twice.
For Orkin, LLM-powered NPCs can make games unpredictable—and that’s exciting. “It introduces a lot of open questions, like what you do when a character answers you but that sends a story in a direction that nobody planned for,” he says.
It might mean games that are unlike anything we’ve seen thus far. Gaming experiences that unspool as the characters’ relationships shift and change, as friendships start and end, could unlock entirely new narrative experiences that are less about action and more about conversation and personalities.
Togelius imagines new worlds built to react to the player’s own wants and needs, populated with NPCs that the player must teach or influence as the game progresses. Imagine interacting with characters whose opinions can change, whom you could persuade or motivate to act in a certain way—say, to go to battle with you. “A thoroughly generative game could be really, really good,” he says. “But you really have to change your whole expectation of what a game is.”
Lantz is currently working on a prototype of a game in which the premise is that you—the player—wake up dead, and the afterlife you are in is a low-rent, cheap version of a synthetic world. The game plays out like a noir in which you must explore a city full of thousands of NPCs powered by a version of ChatGPT, whom you must interact with to work out how you ended up there.
His early experiments gave him some eerie moments when he felt that the characters seemed to know more than they should, a sensation recognizable to people who have played with LLMs before. Even though you know they’re not alive, they can still freak you out a bit.
“If you run electricity through a frog’s corpse, the frog will move,” he says. “And if you run $10 million worth of computation through the internet … it moves like a frog, you know.”
But these early forays into generative-AI gaming have given him a real sense of excitement for what’s next: “I felt like, okay, this is a thread. There really is a new kind of artwork here.”
If an AI NPC talks and no one is around to listen, is there a sound?
AI NPCs won’t just enhance player interactions—they might interact with one another in weird ways. Red Dead Redemption 2’s NPCs each have long, detailed scripts that spell out exactly where they should go, what work they must complete, and how they’d react if anything unexpected occurred. If you want, you can follow an NPC and watch it go about its day. It’s fun, but ultimately it’s hard-coded.
NPCs built with generative AI could have a lot more leeway—even interacting with one another when the player isn’t there to watch. Just as people have been fooled into thinking LLMs are sentient, watching a city of generated NPCs might feel like peering over the top of a toy box that has somehow magically come alive.
We’re already getting a sense of what this might look like. At Stanford University, Joon Sung Park has been experimenting with AI-generated characters and watching to see how their behavior changes and gains complexity as they encounter one another.
Because large language models have sucked up the internet and social media, they actually contain a lot of detail about how we behave and interact, he says.
In Park’s recent research, he and colleagues set up a Sims-like game, called Smallville, with 25 simulated characters that had been trained using generative AI. Each was given a name and a simple biography before being set in motion. When left to interact with each other for two days, they began to exhibit humanlike conversations and behavior, including remembering each other and being able to talk about their past interactions.
For example, the researchers prompted one character to organize a Valentine’s Day party—and then let the simulation run. That character sent invitations around town, while other members of the community asked each other on dates to go to the party, and all turned up at the venue at the correct time. All of this was carried out through conversations, and past interactions between characters were stored in their “memories” as natural language.
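The "memories stored as natural language" idea can be sketched as a time-stamped log of plain-text observations that an agent retrieves to condition its next action. The structure below is a simplified illustration; the actual research system is more elaborate (it also scores memories by recency, importance, and relevance).

```python
# Sketch of natural-language agent memory in the spirit of the
# Smallville experiment described above. Simplified for illustration.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memories: list = field(default_factory=list)  # (time, text) pairs

    def remember(self, t, event):
        """Append a plain-text observation to the memory stream."""
        self.memories.append((t, event))

    def recall(self, keyword):
        """Naive retrieval: return memories mentioning a keyword."""
        return [e for t, e in self.memories if keyword.lower() in e.lower()]

isabella = Agent("Isabella")
isabella.remember(1, "Decided to host a Valentine's Day party at the cafe.")
isabella.remember(2, "Invited Klaus to the party; he said he would come.")

print(isabella.recall("party"))
```

Retrieved memories are fed back into the LLM prompt before each action, which is how the simulated characters can reference past interactions and coordinate something like a party without any scripted plot.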
For Park, the implications for gaming are huge. “This is exactly the sort of tech that the gaming community for their NPCs have been waiting for,” he says.
His research has inspired games like AI Town, an open-source interactive experience on GitHub that lets human players interact with AI NPCs in a simple top-down game. You can leave the NPCs to get along for a few days and check in on them, reading the transcripts of the interactions they had while you were away. Anyone is free to take AI Town’s code to build new NPC experiences through AI.
For Daniel De Freitas, cofounder of the startup Character AI, which lets users generate and interact with their own LLM-powered characters, the generative-AI revolution will allow new types of games to emerge—ones in which the NPCs don’t even need human players.
The player is “joining an adventure that is always happening, that the AIs are playing,” he imagines. “It’s the equivalent of joining a theme park full of actors, but unlike the actors, they truly ‘believe’ that they are in those roles.”
If you’re getting Westworld vibes right about now, you’re not alone. There are plenty of stories about people torturing or killing their simple Sims characters in the game for fun. Would mistreating NPCs that pass for real humans cross some sort of new ethical boundary? What if, Lantz asks, an AI NPC that appeared conscious begged for its life when you simulated torturing it?
It raises complex questions, he adds. “One is: What are the ethical dimensions of pretend violence? And the other is: At what point do AIs become moral agents to which harm can be done?”
There are other potential issues too. An immersive world that feels real, and never ends, could be dangerously addictive. Some users of AI chatbots have already reported losing hours and even days in conversation with their creations. Are there dangers that the same parasocial relationships could emerge with AI NPCs?
“We may need to worry about people forming unhealthy relationships with game characters at some point,” says Togelius. Until now, players have been able to differentiate pretty easily between game play and real life. But AI NPCs might change that, he says: “If at some point what we now call ‘video games’ morph into some all-encompassing virtual reality, we will probably need to worry about the effect of NPCs being too good, in some sense.”
A portrait of the artist as a young bot
Not everyone is convinced that never-ending open-ended conversations between the player and NPCs are what we really want for the future of games.
“I think we have to be cautious about connecting our imaginations with reality,” says Mike Cook, an AI researcher and game designer. “The idea of a game where you can go anywhere, talk to anyone, and do anything has always been a dream of a certain kind of player. But in practice, this freedom is often at odds with what we want from a story.”
In other words, having to generate a lot of the dialogue yourself might actually get kind of … well, boring. “If you can’t think of interesting or dramatic things to say, or are simply too tired or bored to do it, then you’re going to basically be reading your own very bad creative fiction,” says Cook.
Orkin likewise doesn’t think conversations that could go anywhere are actually what most gamers want. “I want to play a game that a bunch of very talented, creative people have really thought through and created an engaging story and world,” he says.
This idea of authorship is an important part of game play, agrees Togelius. “You can generate as much as you want,” he says. “But that doesn’t guarantee that anything is interesting and worth keeping. In fact, the more content you generate, the more boring it might be.”
Sometimes, the possibility of everything is too much to cope with. No Man’s Sky, a hugely hyped space game launched in 2016 that used algorithms to generate endless planets to explore, was seen by many players as a bit of a letdown when it finally arrived. Players quickly discovered that being able to explore a universe that never ended, with worlds that were endlessly different, actually fell a little flat. (A series of updates over subsequent years has made No Man’s Sky a little more structured, and it’s now generally well thought of.)
One approach might be to keep AI gaming experiences tight and focused.
Hilary Mason, CEO at the gaming startup Hidden Door, likes to joke that her work is “artisanal AI.” She is from Brooklyn, after all, says her colleague Chris Foster, the firm’s game director, laughing.
Hidden Door, which has not yet released any products, is making role-playing text adventures based on classic stories that the user can steer. It’s like Dungeons & Dragons for the generative AI era. It stitches together classic tropes for certain adventure worlds, and an annotated database of thousands of words and phrases, and then uses a variety of machine-learning tools, including LLMs, to make each story unique. Players walk through a semi-unstructured storytelling experience, free-typing into text boxes to control their character.
The result feels a bit like hand-annotating an AI-generated novel with Post-it notes.
In a demo with Mason, I got to watch as her character infiltrated a hospital and attempted to hack into the server. Each suggestion prompted the system to spin up the next part of the story, with the large language model creating new descriptions and in-game objects on the fly.
Each experience lasts between 20 and 40 minutes, and for Foster, it creates an “expressive canvas” that people can play with. The fixed length and the added human touch—Mason’s artisanal approach—give players “something really new and magical,” he says.
There’s more to life than games
Park thinks generative AI that makes NPCs feel alive in games will have other, more fundamental implications further down the line.
“This can, I think, also change the meaning of what games are,” he says.
For example, he’s excited about using generative-AI agents to simulate how real people act. He thinks AI agents could one day be used as proxies for real people to, for example, test out the likely reaction to a new economic policy. Counterfactual scenarios could be plugged in that would let policymakers run time backwards to try to see what would have happened if a different path had been taken.
“You want to learn that if you implement this social policy or economic policy, what is going to be the impact that it’s going to have on the target population?” he suggests. “Will there be unexpected side effects that we’re not going to be able to foresee on day one?”
And while Inworld is focused on adding immersion to video games, it has also worked with LG in South Korea to make characters that kids can chat with to improve their English language skills. Others are using Inworld’s tech to create interactive experiences. One of these, called Moment in Manzanar, was created to help players empathize with the Japanese-Americans the US government detained in internment camps during World War II. It allows the user to speak to a fictional character called Ichiro who talks about what it was like to be held in the Manzanar camp in California.
Inworld’s NPC ambitions might be exciting for gamers (my future excursions as a cowboy could be even more immersive!), but there are some who believe using AI to enhance existing games is thinking too small. Instead, we should be leaning into the weirdness of LLMs to create entirely new kinds of experiences that were never possible before, says Togelius. The shortcomings of LLMs “are not bugs—they’re features,” he says.
Lantz agrees. “You have to start with the reality of what these things are and what they do—this kind of latent space of possibilities that you’re surfing and exploring,” he says. “These engines already have that kind of a psychedelic quality to them. There’s something trippy about them. Unlocking that is the thing that I’m interested in.”
Whatever is next, we probably haven’t even imagined it yet, Lantz thinks.
“And maybe it’s not about a simulated world with pretend characters in it at all,” he says. “Maybe it’s something totally different. I don’t know. But I’m excited to find out.”
In a clean room in his lab, Sean Moore peers through a microscope at a bit of intestine, its dark squiggles and rounded structures standing out against a light gray background. This sample is not part of an actual intestine; rather, it’s human intestinal cells on a tiny plastic rectangle, one of 24 so-called “organs on chips” his lab bought three years ago.
Moore, a pediatric gastroenterologist at the University of Virginia School of Medicine, hopes the chips will offer answers to a particularly thorny research problem. He studies rotavirus, a common infection that causes severe diarrhea, vomiting, dehydration, and even death in young children. In the US and other rich nations, up to 98% of the children who are vaccinated against rotavirus develop lifelong immunity. But in low-income countries, only about a third of vaccinated children become immune. Moore wants to know why.
His lab uses mice for some protocols, but animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people. Researchers have documented this translation gap since at least 1962. “All these pharmaceutical companies know the animal models stink,” says Don Ingber, founder of the Wyss Institute for Biologically Inspired Engineering at Harvard and a leading advocate for organs on chips. “The FDA knows they stink.”
But until recently there was no other option. Research questions like Moore’s can’t ethically or practically be addressed with a randomized, double-blinded study in humans. Now these organs on chips, also known as microphysiological systems, may offer a truly viable alternative. They look remarkably prosaic: flexible polymer rectangles about the size of a thumb drive. In reality they’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues. These tissues expand and contract with the flow of fluid and air, mimicking key organ functions like breathing, blood flow, and peristalsis, the muscular contractions of the digestive system.
More than 60 companies now produce organs on chips commercially, focusing on five major organs: liver, kidney, lung, intestines, and brain. They’re already being used to understand diseases, discover and test new drugs, and explore personalized approaches to treatment.
As they continue to be refined, they could solve one of the biggest problems in medicine today. “You need to do three things when you’re making a drug,” says Lorna Ewart, a pharmacologist and chief scientific officer of Emulate, a biotech company based in Boston. “You need to show it’s safe. You need to show it works. You need to be able to make it.”
All new compounds have to pass through a preclinical phase, where they’re tested for safety and effectiveness before moving to clinical trials in humans. Until recently, those tests had to run in at least two animal species—usually rats and dogs—before the drugs were tried on people.
But in December 2022, President Biden signed the FDA Modernization Act, which amended the original FDA Act of 1938. With a few small word changes, the act opened the door for non-animal-based testing in preclinical trials. Anything that makes it faster and easier for pharmaceutical companies to identify safe and effective drugs means better, potentially cheaper treatments for all of us.
Moore, for one, is banking on it, hoping the chips help him and his colleagues shed light on the rotavirus vaccine responses that confound them. “If you could figure out the answer,” he says, “you could save a lot of kids’ lives.”
While many teams have worked on organ chips over the last 30 years, the OG in the field is generally acknowledged to be Michael Shuler, a professor emeritus of chemical engineering at Cornell. In the 1980s, Shuler was a math and engineering guy who imagined an “animal on a chip,” a cell culture base seeded with a variety of human cells that could be used for testing drugs. He wanted to position a handful of different organ cells on the same chip, linked to one another, which could mimic the chemical communication between organs and the way drugs move through the body. “This was science fiction,” says Gordana Vunjak-Novakovic, a professor of biomedical engineering at Columbia University whose lab works with cardiac tissue on chips. “There was no body on a chip. There is still no body on a chip. God knows if there will ever be a body on a chip.”
Shuler had hoped to develop a computer model of a multi-organ system, but there were too many unknowns. The living cell culture system he dreamed up was his bid to fill in the blanks. For a while he played with the concept, but the materials simply weren’t good enough to build what he imagined.
He wasn’t the only one working on the problem. Linda Griffith, a founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant,” designed a crude early version of a liver chip in the late 1990s: a flat silicon chip, just a few hundred micrometers tall, with endothelial cells, oxygen and liquid flowing in and out via pumps, silicone tubing, and a polymer membrane with microscopic holes. She put liver cells from rats on the chip, and those cells organized themselves into three-dimensional tissue. It wasn’t a liver, but it modeled a few of the things a functioning human liver could do. It was a start.
Griffith, who rides a motorcycle for fun and speaks with a soft Southern accent, suffers from endometriosis, an inflammatory condition where cells from the lining of the uterus grow throughout the abdomen. She’s endured decades of nausea, pain, blood loss, and repeated surgeries. She never took medical leaves, instead loading up on Percocet, Advil, and margaritas, keeping a heating pad and couch in her office—a strategy of necessity, as she saw no other choice for a working scientist. Especially a woman.
And as a scientist, Griffith understood that the chronic diseases affecting women tend to be under-researched, underfunded, and poorly treated. She realized that decades of work with animals hadn’t done a damn thing to make life better for women like her. “We’ve got all this data, but most of that data does not lead to treatments for human diseases,” she says. “You can force mice to menstruate, but it’s not really menstruation. You need the human being.”
Or, at least, the human cells. Shuler and Griffith, and other scientists in Europe, worked on some of those early chips, but things really kicked off around 2009, when Don Ingber’s lab in Cambridge, Massachusetts, created the first fully functioning organ on a chip. That “lung on a chip” was made from flexible silicone rubber, lined with human lung cells and capillary blood vessel cells that “breathed” like the alveoli—tiny air sacs—in a human lung. A few years later Ingber, an MD-PhD with the tidy good looks of a younger Michael Douglas, founded Emulate, one of the earliest biotech companies making microphysiological systems. Since then he’s become a kind of unofficial ambassador for in vitro technologies in general and organs on chips in particular, giving hundreds of talks, scoring millions in grant money, repping the field with scientists and laypeople. Stephen Colbert once ragged on him after the New York Times quoted him as describing a chip that “walks, talks, and quacks like a human vagina,” a quote Ingber says was taken out of context.
Ingber began his career working on cancer. But he struggled with the required animal research. “I really didn’t want to work with them anymore, because I love animals,” he says. “It was a conscious decision to focus on in vitro models.” He’s not alone; a growing number of young scientists are speaking up about the distress they feel when research protocols cause pain, trauma, injury, and death to lab animals. “I’m a master’s degree student in neuroscience and I think about this constantly. I’ve done such unspeakable, horrible things to mice all in the name of scientific progress, and I feel guilty about this every day,” wrote one anonymous student on Reddit. (Full disclosure: I switched out of a psychology major in college because I didn’t want to cause harm to animals.)
Emulate is one of the companies building organ-on-a-chip technology. The devices combine live human cells with a microenvironment designed to emulate specific tissues.
Taking an undergraduate art class led Ingber to an epiphany: mechanical forces are just as important as chemicals and genes in determining the way living creatures work. On a shelf in his office he still displays a model he built in that art class, a simple construction of sticks and fishing line, which helped him realize that cells pull and twist against each other. That realization foreshadowed his current work and helped him design dynamic microfluidic devices that incorporated shear and flow.
Ingber coauthored a 2022 paper that’s sometimes cited as a watershed in the world of organs on chips. Researchers used Emulate’s liver chips to reevaluate 27 drugs that had previously made it through animal testing and had then gone on to kill 242 people and necessitate more than 60 liver transplants. The liver chips correctly flagged problems with 22 of the 27 drugs, an 87% success rate compared with a 0% success rate for animal testing. It was the first time organs on chips had been directly pitted against animal models, and the results got a lot of attention from the pharmaceutical industry. Dan Tagle, director of the Office of Special Initiatives for the National Center for Advancing Translational Sciences (NCATS), estimates that drug failures cost around $2.6 billion globally each year. The earlier in the process failing compounds can be weeded out, the more room there is for other drugs to succeed.
“The capacity we have to test drugs is more or less fixed in this country,” says Shuler, whose company, Hesperos, also manufactures organs on chips. “There are only so many clinical trials you can do. So if you put a loser into the system, that means something that could have won didn’t get into the system. We want to change the success rate from clinical trials to a much higher number.”
In 2011, the National Institutes of Health established NCATS and started investing in organs on chips and other in vitro technologies. Other government funders, like the Defense Advanced Research Projects Agency and the Food and Drug Administration, have followed suit. For instance, NIH recently funded NASA scientists to send heart tissue on chips into space. Six months in low gravity ages the cardiovascular system 10 years, so this experiment lets researchers study some of the effects of aging without harming animals or humans.
Scientists have made liver chips, brain chips, heart chips, kidney chips, intestine chips, and even a female reproductive system on a chip (with cells from ovaries, fallopian tubes, and uteruses that release hormones and mimic an actual 28-day menstrual cycle). Each of these chips exhibits some of the specific functions of the organs in question. Cardiac chips, for instance, contain heart cells that beat just like heart muscle, making it possible for researchers to model disorders like cardiomyopathy.
Shuler thinks organs on chips will revolutionize the world of research for rare diseases. “It is a very good model when you don’t have enough patients for normal clinical trials and you don’t have a good animal model,” he says. “So it’s a way to get drugs to people that couldn’t be developed in our current pharmaceutical model.” Shuler’s own biotech company used organs on chips to test a potential drug for myasthenia gravis, a rare neurological disorder. In 2022, the FDA approved the drug for clinical trials based on that data—one of six Hesperos drugs that have so far made it to that stage.
Each chip starts with a physiologically based pharmacokinetic model, known as a PBPK model—a mathematical expression of how a chemical compound behaves in a human body. “We try and build a physical replica of the mathematical model of what really occurs in the body,” explains Shuler. That model guides the way the chip is designed, re-creating the amount of time a fluid or chemical stays in that particular organ—what’s known as the residence time. “As long as you have the same residence time, you should get the same response in terms of chemical conversion,” he says.
Tiny channels on each chip, each between 10 and 100 microns in diameter, help bring fluids and oxygen to the cells. “When you get down to less than one micron, you can’t use normal fluid dynamics,” says Shuler. And fluid dynamics matters, because if the fluid moves through the device too quickly, the cells might die; too slowly, and the cells won’t react normally.
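The residence-time idea Shuler describes comes down to simple arithmetic: how long fluid stays in a channel is its volume divided by the volumetric flow rate. A minimal sketch of that calculation, using hypothetical channel dimensions (not taken from any vendor's actual chip specifications):

```python
# Illustrative sketch only: residence time = channel volume / volumetric flow
# rate. Matching an organ's residence time is the design target Shuler
# describes. The dimensions below are hypothetical example values.

UL_PER_CUBIC_METER = 1e9  # 1 cubic meter = 1e9 microliters

def channel_volume_ul(width_m: float, height_m: float, length_m: float) -> float:
    """Volume of a rectangular microchannel, in microliters."""
    return width_m * height_m * length_m * UL_PER_CUBIC_METER

def residence_time_min(volume_ul: float, flow_ul_per_min: float) -> float:
    """Minutes a parcel of fluid spends inside the channel."""
    return volume_ul / flow_ul_per_min

# A 100-micron x 100-micron channel, 10 mm long, perfused at 1 uL/min:
vol = channel_volume_ul(100e-6, 100e-6, 10e-3)   # 0.1 uL
tau = residence_time_min(vol, 1.0)                # 0.1 min, about 6 seconds
```

The same relationship shows why flow rate is so delicate: halving the flow doubles the residence time, so a small pumping error shifts how long cells are exposed to nutrients, oxygen, or a test compound.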
Chip technology, while sophisticated, has some downsides. One of them is user friendliness. “We need to get rid of all this tubing and pumps and make something that’s as simple as a well plate for culturing cells,” says Vunjak-Novakovic. Her lab and others are working on simplifying the design and function of such chips so they’re easier to operate and are compatible with robots, which do repetitive tasks like pipetting in many labs.
Cost and sourcing can also be challenging. Emulate’s base model, which looks like a simple rectangular box from the outside, starts at around $100,000 and rises steeply from there. Most human cells come from commercial suppliers that arrange for donations from hospital patients. During the pandemic, when people had fewer elective surgeries, many of those sources dried up. As microphysiological systems become more mainstream, finding reliable sources of human cells will be critical.
Another challenge is that every company producing organs on chips uses its own proprietary methods and technologies. Ingber compares the landscape to the early days of personal computing, when every company developed its own hardware and software, and none of them meshed well. For instance, the microfluidic systems in Emulate’s intestine chips are fueled by micropumps, while those made by Mimetas, another biotech company, use an electronic rocker and gravity to circulate fluids and air. “This is not an academic lab type of challenge,” emphasizes Ingber. “It’s a commercial challenge. There’s no way you can get the same results anywhere in the world with individual academics making [organs on chips], so you have to have commercialization.”
Namandje Bumpus, the FDA’s chief scientist, agrees. “You can find differences [in outcomes] depending even on what types of reagents you’re using,” she says. Those differences mean research can’t be easily reproduced, which diminishes its validity and usefulness. “It would be great to have some standardization,” she adds.
On the plus side, the chip technology could help researchers address some of the most deeply entrenched health inequities in science. Clinical trials have historically recruited white men, underrepresenting people of color, women (especially pregnant and lactating women), the elderly, and other groups. And treatments derived from those trials all too often fail in members of those underrepresented groups, as in Moore’s rotavirus vaccine mystery. “With organs on a chip, you may be able to create systems by which you are very, very thoughtful—where you spread the net wider than has ever been done before,” says Moore.
This microfluidic platform, designed by MIT engineers, connects engineered tissue from up to 10 organs.
Another advantage is that chips will eventually reduce the need for animals in the lab even as they lead to better human outcomes. “There are aspects of animal research that make all of us uncomfortable, even people that do it,” acknowledges Moore. “The same values that make us uncomfortable about animal research are also the same values that make us uncomfortable with seeing human beings suffer with diseases that we don’t have cures for yet. So we always sort of balance that desire to reduce suffering in all the forms that we see it.”
Lorna Ewart, who spent 20 years at the pharma giant AstraZeneca before joining Emulate, thinks we’re entering a kind of transition time in research, in which scientists use in vitro technologies like organs on chips alongside traditional cell culture methods and animals. “As your confidence in using the chips grows, you might say, Okay, we don’t need two animals anymore—we could go with chip plus one animal,” she says.
In the meantime, Sean Moore is excited about incorporating intestine chips more and more deeply into his research. His lab has been funded by the Gates Foundation to do what he laughingly describes as a bake-off between intestine chips made by Emulate and Mimetas. They’re infecting the chips with different strains of rotavirus to try to identify the pros and cons of each company’s design. It’s too early for any substantive results, but Moore says he does have data showing that organ chips are a viable model for studying rotavirus infection. That could ultimately be a real game-changer in his lab and in labs around the world.
“There’s more players in the space right now,” says Moore. “And that competition is going to be a healthy thing.”
Harriet Brown writes about health, medicine, and science. Her most recent book is Shadow Daughter: A Memoir of Estrangement. She’s a professor of magazine, news, and digital journalism at Syracuse University’s Newhouse School.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects. “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” Vivek Murthy wrote in a piece published in the New York Times. “Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.”
His concern instinctively resonates with me. I’m in my late 30s, and even I can end up feeling a lot worse about myself after a brief stint on Instagram. I have two young daughters, and I worry about how I’ll respond when they reach adolescence and start asking for access to whatever social media site their peers are using. My children already have a fascination with cell phones; the eldest, who is almost six, will often come into my bedroom at the crack of dawn, find my husband’s phone, and somehow figure out how to blast “Happy Xmas (War Is Over)” at full volume.
But I also know that the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them.
Murthy’s concerns aren’t new, of course. In fact, almost any time we are introduced to a new technology, some will warn of its potential dangers. Innovations like the printing press, radio, and television all had their critics back in the day. In 2009, the Daily Mail linked Facebook use to cancer.
More recently, concerns about social media have centered on young people. There’s a lot going on in our teenage years as our brains undergo maturation, our hormones shift, and we explore new ways to form relationships with others. We’re thought to be more vulnerable to mental-health disorders during this period too. Around half of such disorders are thought to develop by the age of 14, and suicide is the fourth-leading cause of death in people aged between 15 and 19, according to the World Health Organization. Many have claimed that social media only makes things worse.
Reports have variously cited cyberbullying, exposure to violent or harmful content, and the promotion of unrealistic body standards, for example, as potential key triggers of low mood and disorders like anxiety and depression. There have also been several high-profile cases of self-harm and suicide with links to social media use, often involving online bullying and abuse. Just this week, the suicide of an 18-year-old in Kerala, India, was linked to cyberbullying. And children have died after taking part in dangerous online challenges made viral on social media, whether from inhaling toxic substances, consuming ultra-spicy tortilla chips, or choking themselves.
Murthy’s new article follows an advisory on social media and youth mental health published by his office in 2023. The 25-page document, which lays out some of the known benefits and harms of social media use as well as the “unknowns,” was intended to raise awareness of social media as a health issue. The problem is that things are not entirely clear cut.
“The evidence is currently quite limited,” says Ruth Plackett, a researcher at University College London who studies the impact of social media on mental health in young people. A lot of the research on social media and mental health is correlational. It doesn’t show that social media use causes mental health disorders, Plackett says.
The surgeon general’s advisory cites some of these correlational studies. It also points to survey-based studies, including one looking at mental well-being among college students after the rollout of Facebook in the mid-2000s. But even if you accept the authors’ conclusion that Facebook had a negative impact on the students’ mental health, it doesn’t mean that other social media platforms will have the same effect on other young people. Even Facebook, and the way we use it, has changed a lot in the last 20 years.
Other studies have found that social media has no effect on mental health. In a study published last year, Plackett and her colleagues surveyed 3,228 children in the UK to see how their social media use and mental well-being changed over time. The children were first surveyed when they were aged between 12 and 13, and again when they were 14 to 15 years old.
Plackett expected to find that social media use would harm the young participants. But when she conducted the second round of questionnaires, she found that was not the case. “Time spent on social media was not related to mental-health outcomes two years later,” she tells me.
Other research has found that social media use can be beneficial to young people, especially those from minority groups. It can help some avoid loneliness, strengthen relationships with their peers, and find a safe space to express their identities, says Plackett. Social media isn’t only for socializing, either. Today, young people use these platforms for news, entertainment, school, and even (in the case of influencers) business.
“It’s such a mixed bag of evidence,” says Plackett. “I’d say it’s hard to draw much of a conclusion at the minute.”
In his article, Murthy calls for a warning label to be applied to social media platforms, stating that “social media is associated with significant mental-health harms for adolescents.”
But while Murthy draws comparisons to the effectiveness of warning labels on tobacco products, bingeing on social media doesn’t have the same health risks as chain-smoking cigarettes. We have plenty of strong evidence linking smoking to a range of diseases, including gum disease, emphysema, and lung cancer, among others. We know that smoking can shorten a person’s life expectancy. We can’t make any such claims about social media, no matter what was written in that Daily Mail article.
Health warnings aren’t the only way to prevent any potential harms associated with social media use, as Murthy himself acknowledges. Tech companies could go further in reducing or eliminating violent and harmful content, for a start. And digital literacy education could help inform children and their caregivers how to alter the settings on various social media platforms to better control the content children see, and teach them how to assess the content that does make it to their screens.
I like the sound of these measures. They might even help me put an end to the early-morning Christmas songs.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive:
Bills designed to make the internet safer for children have been popping up across the US. But individual states take different approaches, leaving the resulting picture a mess, as Tate Ryan-Mosley explored.
Dozens of US states sued Meta, the parent company of Facebook, last October. As Tate wrote at the time, the states claimed that the company knowingly harmed young users, misled them about safety features and harmful content, and violated laws on children’s privacy.
China has been implementing increasingly tight controls over how children use the internet. In August last year, the country’s cyberspace administrator issued detailed guidelines that include, for example, a rule to limit use of smart devices to 40 minutes a day for children under the age of eight. And even that use should be limited to content about “elementary education, hobbies and interests, and liberal arts education.” My colleague Zeyi Yang had the story in a previous edition of his weekly newsletter, China Report.
Last year, TikTok set a 60-minute-per-day limit for users under the age of 18. But the Chinese domestic version of the app, Douyin, has even tighter controls, as Zeyi wrote last March.
One way that social media can benefit young people is by allowing them to express their identities in a safe space. Filters that superficially alter a person’s appearance to make it more feminine or masculine can help trans people play with gender expression, as Elizabeth Anne Brown wrote in 2022. She quoted Josie, a trans woman in her early 30s. “The Snapchat girl filter was the final straw in dropping a decade’s worth of repression,” Josie said. “[I] saw something that looked more ‘me’ than anything in a mirror, and I couldn’t go back.”
From around the web
Could gentle shock waves help regenerate heart tissue? A trial of what’s being dubbed a “space hairdryer” suggests the treatment could help people recover from bypass surgery. (BBC)
“We don’t know what’s going on with this virus coming out of China right now.” Anthony Fauci gives his insider account of the first three months of the covid-19 pandemic. (The Atlantic)
Microplastics are everywhere. It was only a matter of time before scientists found them in men’s penises. (The Guardian)
Is the singularity nearer? Ray Kurzweil believes so. He also thinks medical nanobots will allow us to live beyond 120. (Wired)
Natalie Mounter is a stay-at-home mom and entrepreneur. Her company, Totally Dazzled, sells craft supplies for weddings and creative projects. She says the business has prospered due to other stay-at-home moms — her employees and affiliates.
She told me, “Being a stay-at-home mom and hiring other stay-at-home moms has been key to our success.”
Mounter and I recently spoke. She shared Totally Dazzled’s origins, its affiliate marketing success, and the value of smart, hardworking moms.
The entire audio of our conversation is embedded below. The transcript is edited for length and clarity.
Eric Bandholz: Give us the rundown of what you do.
Natalie Mounter: I’m the owner of Totally Dazzled. We sell sparkly craft supplies, mainly for weddings and creative projects. We launched in 2012 with an Etsy store. We are now 100% Shopify. It’s a lifestyle business for me. I started it when I was pregnant with my first kid.
We experienced huge growth during the pandemic because everyone started buying online and was looking for things to do at home. The crafting market took off.
We ordered much more inventory and ended up being overstocked. We had to do a lot of discounting this year to get back to a healthy inventory position. We’re in a better place now.
I started the business because I was looking for something to do at home and work the hours I wanted. I read “The 4-Hour Workweek” and thought it sounded perfect for motherhood.
As the business grew, I was fortunate to find key people who were independent and hardworking, who I could offload stuff onto and keep my hours to a minimum. Being a stay-at-home mom and hiring other stay-at-home moms has been key to our success. I get quality employees for an affordable price. Moms are the best team members because they’re smart, hardworking, and very efficient.
Bandholz: How do you find stay-at-home moms?
Mounter: HireMyMom.com is a great resource for stateside help. I also use local networking.
My cousin was my first team member. That was 10 years ago. She had recently had a baby and was looking for a stay-at-home job. I was shipping all of my products from Canada, where I live, but most of my customers were in the U.S. My cousin is in California. So I sent her all my inventory to fulfill for me.
She also handles customer service and social media dialog. My other key team member, another stay-at-home mom, is like a one-woman marketing agency. She does all of our Facebook ads, email marketing, and SMS. I do the planning and the vision. We meet monthly. It’s very hands-off for me. They both do an amazing job.
We have a couple of time-sensitive tasks, such as sending texts to our SMS list when our affiliate brand ambassadors go live on Facebook. We rely on an assistant in the Philippines for that.
Bandholz: Are affiliates your primary marketing tactic?
Mounter: Affiliates are our top traffic source. I love working with them because they’re less stressful and risky than Facebook ads, where we spend money and hope and pray the ads will work.
We recruit affiliates by sending them a free product and then incentivize them with a commission. We’re not out any cash until they make a sale. I would much rather pay female content creators than Mark Zuckerberg!
When I first began looking for affiliates, I typed “DIY brooch bouquet” into Google and contacted the creators making those types of videos. You can find people’s contact methods on their YouTube channels. I emailed them and asked if they would promote us and earn commissions on sales. We’ve had a lot of success with that strategy.
Initially the outreach felt daunting. It took a lot of time. But once we had a network of affiliates, the word in the community got out. Other creators saw the videos promoting Totally Dazzled and wanted to do the same. We still do some outreach, but not nearly as much as in the old days.
We use affiliate management software to track performance. We can see how much traffic each drives and whether they are making sales. If they’re doing well, we’ll send them a bunch of free stuff. We keep tabs on our affiliates and spoil the high performers, but we keep the others engaged because they’re still talking about us. Those additional touchpoints eventually drive sales.
We offer a 20% commission and a 30-day cookie window. Both are pretty generous. I believe in generosity because sometimes affiliates are not rewarded for their referrals. Someone might see their content and then go straight to our website.
Success in affiliate marketing requires building long-term partnerships versus being metrics-driven. Many sellers fail with affiliates for that reason.
Plus, audience size is not the only sales predictor. One of our top-performing affiliates generated $30,000 in one day. She has 1.7 million followers on Facebook. Another high performer has about 200,000 followers on Facebook, but her audience is highly aligned with our product.
Bandholz: How do you stay engaged in the business after 12 years?
Mounter: What helps me stay motivated is remembering the why and focusing on gratitude. I’ve been able to raise my kids, never miss their events, and avoid stress when they’re sick. I take time off when I want.
Bandholz: Where can listeners support you or buy your products?