Supersonic planes are inching toward takeoff. That could be a problem.

Boom Supersonic broke the sound barrier in a test flight of its XB-1 jet last week, marking an early step in a potential return for supersonic commercial flight. The small aircraft reached a top speed of Mach 1.122 (roughly 750 miles per hour) in a flight over southern California and exceeded the speed of sound for a few minutes. 

“XB-1’s supersonic flight demonstrates that the technology for passenger supersonic flight has arrived,” said Boom founder and CEO Blake Scholl in a statement after the test flight.

Boom plans to start commercial operation with a scaled-up version of the XB-1, a 65-passenger jet called Overture, before the end of the decade, and it has already sold dozens of planes to customers including United Airlines and American Airlines. But as the company inches toward that goal, experts warn that such efforts will come with a hefty climate price tag. 

Supersonic planes will burn significantly more fuel than current aircraft, resulting in higher emissions of carbon dioxide, which fuels climate change. Supersonic jets also fly higher than current commercial planes do, introducing atmospheric effects that may warm the planet further.

In response to questions from MIT Technology Review, Boom pointed to alternative fuels as a solution, but those remain in limited supply—and they could have limited use in cutting emissions in supersonic aircraft. Aviation is a significant and growing contributor to human-caused climate change, and supersonic technologies could grow the sector’s pollution, rather than make progress toward shrinking it.

XB-1 follows a long history of global supersonic flight. Humans first broke the sound barrier in 1947, when Chuck Yeager hit 700 miles per hour in a research aircraft (the speed of sound at that flight’s altitude is 660 miles per hour). Just over two decades later, in 1969, the first supersonic commercial airliner, the Concorde, took its first flight. That aircraft regularly traveled at supersonic speeds until the last one was decommissioned in 2003.

Among other issues (like the nuisance of sonic booms), one of the major downfalls of the Concorde was its high operating cost, due in part to the huge amounts of fuel it required to reach top speeds. Experts say today’s supersonic jets will face similar challenges. 

Flying close to the speed of sound changes the aerodynamics required of an aircraft, says Raymond Speth, associate director of the MIT Laboratory for Aviation and the Environment. “All the things you have to do to fly at supersonic speed,” he says, “they reduce your efficiency … There’s a reason we have this sweet spot where airplanes fly today, around Mach 0.8 or so.”

Boom estimates that one of its full-sized Overture jets will burn two to three times as much fuel per passenger as a subsonic plane’s first-class cabin. The company chose this comparison because its aircraft is “designed to deliver an enhanced, productive cabin experience,” similar to what’s available in first- and business-class cabins on today’s aircraft. 

That baseline, however, isn’t representative of the average traveler today. Compared to standard economy-class travel, first-class cabins tend to have larger seats with more space between them. Because there are fewer seats, more fuel is required per passenger, and therefore more emissions are produced for each person. 

When passengers crammed into coach are considered in addition to those in first class, a Boom Supersonic flight will burn somewhere between five and seven times more fuel per passenger than the average subsonic flight today, according to research from the International Council on Clean Transportation.

It’s not just carbon dioxide from burning fuel that could add to supersonic planes’ climate impact. All jet engines release other pollutants as well, including nitrogen oxides, black carbon, and sulfur.

The difference is that while commercial planes today top out in the troposphere, supersonic aircraft tend to fly higher in the atmosphere, in the stratosphere. The air is less dense at higher altitudes, creating less drag on the plane and making it easier to reach supersonic speeds.

Flying in the stratosphere, and releasing pollutants there, could increase the climate impacts of supersonic flight, Speth says. For one, nitrogen oxides released in the stratosphere damage the ozone layer through chemical reactions at that altitude.

It’s not all bad news, to be fair. The drier air in the stratosphere means supersonic jets likely won’t produce significant contrails. That could be a benefit for climate, since contrails contribute to aviation’s warming.

Boom has also touted plans to make up for its expected climate impacts by making its aircraft compatible with 100% sustainable aviation fuel (SAF), a category of alternative fuels made from biological sources, waste products, or even captured carbon from the air. “Going faster requires more energy, but it doesn’t need to emit more carbon. Overture is designed to fly on net-zero carbon sustainable aviation fuel (SAF), eliminating up to 100% of carbon emissions,” a Boom spokesperson said via email in response to written questions from MIT Technology Review.

However, alternative fuels may not be a saving grace for supersonic flight. Most commercially available SAF today is made with a process that cuts emissions between 50% and 70% compared to fossil fuels. So a supersonic jet running on SAFs may emit less carbon dioxide than one running on fossil fuels, but alternative fuels will likely still come with some level of carbon pollution attached, says Dan Rutherford, senior director of research at the International Council on Clean Transportation. 

“People are pinning a lot of hope on SAFs,” says Rutherford. “But the reality is, today they remain scarce [and] expensive, and they have sustainability concerns of their own.”

Of the 100 billion gallons of jet fuel used last year, only about 0.5% was SAF. Companies are building new factories to produce larger volumes of the fuels and expand the available options, but the fuel is likely going to continue to make up a small fraction of the existing fuel supply, Rutherford says. That means supersonic jets will be competing with other, existing planes for the same supply, and aiming to use more of it.

Boom Supersonic has secured 10 million gallons of SAF annually from Dimensional Energy and Air Company for the duration of the Overture test flight program, according to the company spokesperson’s email. Ultimately, though, if and when Overture reaches commercial operation, it will be the airlines that purchase its planes hunting for a fuel supply—and paying for it. 

There’s also a chance that using SAFs in supersonic jets could come with unintended consequences, as the fuels have a slightly different chemical makeup than fossil fuels. For example, fossil fuels generally contain sulfur, which has a cooling effect, as sulfur aerosols formed from jet engine exhaust help reflect sunlight. (Intentional release of sulfur is one strategy being touted by groups aiming to start geoengineering the atmosphere.) That effect is stronger in the stratosphere, where supersonic jets are likely to fly. SAFs, however, typically have very low sulfur levels, so using the alternative fuels in supersonic jets could potentially result in even more warming overall.

There are other barriers that Boom and others will need to surmount to get a new supersonic jet industry off the ground. Supersonic travel over land is largely banned, because of the noise and potential damage that comes from the shock wave caused by breaking the sound barrier. While some projects, including one at NASA, are working on changes to aircraft that would result in a less disruptive shock wave, these so-called low-boom technologies are far from proven. NASA’s prototype was revealed last year, and the agency is currently conducting tests of the aircraft, with first flight anticipated sometime this year.  

Boom is planning a second supersonic test flight for XB-1, as early as February 10, according to the spokesperson. Once testing in that small aircraft is done, the data will be used to help build Overture, the full-scale plane. The company says it plans to begin production on Overture in its factory in roughly 18 months. 

In the meantime, the world continues to heat up. As MIT’s Speth says, “I feel like it’s not the time for aviation to be coming up with new ways of using even more energy, with where we are in the climate crisis.”

Charts: AI Outlook, Employees vs. Execs, Q1 2025

Employees worldwide are adopting generative AI faster than their leaders expect, according to McKinsey & Company’s January 2025 report, “Superagency in the workplace: Empowering people to unlock AI’s full potential.”

The report examines survey results on companies’ preparedness for AI adoption. In October and November 2024, McKinsey surveyed 3,613 employees, managers, and individual contributors, along with 238 C-level executives.

Eighty-one percent of respondents were from the United States, while the rest represented Australia, India, New Zealand, Singapore, and the United Kingdom. Participants held various roles across business development, finance, marketing, product management, sales, and technology.

According to the report, 62% of millennials aged 35 to 44 reported strong expertise with AI.

Public sector, aerospace/defense, and semiconductor workers are less optimistic about AI’s near-term impact. Only 20% expect significant changes to their work in the next year, contrasting with the media/entertainment and telecom sectors, where about two-thirds anticipate significant AI-driven changes.

Most executives (87%) anticipate generative AI will boost revenue within three years, with half expecting gains above 5%.

LinkedIn Report Reveals 5 Key Trends Reshaping B2B Marketing via @sejournal, @MattGSouthern

A new LinkedIn report shows how businesses are changing their approach to measuring marketing success.

The report, based on insights from leaders at Microsoft, ServiceNow, PwC, and other global firms, identifies five key trends reshaping measurement strategies.

1. Revenue-Centric Metrics

Marketers are now focusing more on revenue-related metrics instead of traditional cost-per-lead measures.

Leaders are adopting tools that sync CRM data with campaign engagement. These tools bridge the gap between marketing activity and business outcomes and show how specific efforts drive deals.

Other critical shifts include:

  • Marketing Qualified Leads (MQLs) are no longer the primary metric because their conversion rates are inconsistent.
  • There is a greater emphasis on “sourced pipeline,” which refers to deals generated by marketing, and “influenced pipeline,” which measures the effect of multiple touchpoints in marketing.

ServiceNow’s Vivek Khandelwal noted:

“You can talk about click-through rate, cost per click, and cost per impression all day long, but what eventually matters to the business are the revenue metrics. It’s all about how many customers we’re winning, how many opportunities we’re creating, and the ROI we’re generating on marketing investments.”

Personio’s Alex Venus emphasized:

“Our North Star metric is qualified pipeline, which means an opportunity that your salespeople care about, which should be converting at a rate of 25% or more.”

2. ROI Frameworks for Brand Marketing

CFOs now need proof that brand-building works financially. This means marketers must show how their awareness efforts lead to sales results.

The report reads:

“The emphasis is shifting from the cost of marketing outcomes to the value of those outcomes. For marketers, that means reporting on KPIs that correlate with revenue in a clear and consistent way – at a rate that both sales and finance can believe in.”

To justify brand spend, teams are:

  • Separating brand and demand budgets to optimize spending.
  • Running campaigns focused on specific high-value accounts, then tracking deal timelines for correlation.
  • Balancing engagement (e.g., branded search growth) with pipeline influence.

3. AI-Powered Attribution Models

B2B buying groups are getting larger, often including 6 to 10 members.

As a result, marketers are now using machine learning models instead of outdated last-touch attribution methods.

Julien Harazi, Head of Lead Generation at Cegid, stated in the report:

“As B2B marketers, our world has become a lot more complicated. All of the touchpoints are intertwined and it can be difficult to understand the buyer journey and identify where the value comes from in terms of your marketing.”

Emerging solutions include:

  • Lifetime value (LTV) analysis by channel/segment
  • Media Mix Modelling to assess cross-channel synergies
  • Integration with LinkedIn Sales Navigator for account-level journey mapping
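
To make that shift concrete, here is a minimal sketch (not from the report) contrasting last-touch credit with a simple regression-based weighting over synthetic journey data. The channels, journeys, and conversion outcomes are invented, and scikit-learn is assumed to be available.

```python
# Minimal sketch: last-touch vs. a simple regression-based attribution.
# All journeys, channels, and outcomes below are synthetic examples.
from collections import Counter
from sklearn.linear_model import LogisticRegression

channels = ["paid_search", "organic_social", "email", "events"]

# Each journey: (ordered list of touchpoints, converted?)
journeys = [
    (["organic_social", "email", "paid_search"], 1),
    (["paid_search"], 0),
    (["events", "email", "paid_search"], 1),
    (["organic_social"], 0),
    (["email", "paid_search"], 1),
    (["organic_social", "events"], 0),
]

# 1) Last-touch: all credit goes to the final touch before conversion.
last_touch = Counter(touches[-1] for touches, converted in journeys if converted)
print("Last-touch credit:", dict(last_touch))

# 2) "Data-driven" flavor: fit a logistic regression on per-channel touch
#    counts and read the coefficients as relative channel influence.
X = [[touches.count(c) for c in channels] for touches, _ in journeys]
y = [converted for _, converted in journeys]
model = LogisticRegression().fit(X, y)
weights = dict(zip(channels, model.coef_[0].round(2)))
print("Regression-based channel weights:", weights)
```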

4. Multi-Timeframe Measurements

Leaders now measure performance across three timelines to balance immediate optimizations with long-term growth:

  1. Real-time: Cost-per-qualified lead optimizations
  2. Mid-term: 3–12-week pipeline ROAS
  3. Long-term: LTV-adjusted ROI incorporating brand investments

This approach helps teams avoid over-indexing on short-term gains while undervaluing brand-building.

Sveta Freidman, Global Data & Analytics Lead at Xero, states in the report:

“One of my goals is to build an understanding of lifetime value by channel, segment level and by platform so that we can optimize our approach around the best outcomes for our business.”

5. Unified Real-Time Dashboards 

With 73% of marketers citing siloed data as a top challenge, integrated analytics tools are becoming critical.

Solutions gaining momentum include:

  • LinkedIn Insight Tag for cross-website behavioral tracking
  • Hybrid metrics balancing brand engagement and demand signals
  • Predictive AI models identifying untracked revenue influences

What This Means For Marketers

The report highlights the value of measurement for brand growth.

These three priorities stand out for B2B marketers:

  1. Link metrics to revenue.
  2. Use tools like multi-touch attribution and brand lift studies to assess demand and brand impact.
  3. Balance real-time optimizations with long-term customer value analysis.

Success in B2B marketing depends on your ability to translate data into language that resonates with CFOs and business leaders.

Download the full report for more details.

Up-To-Date Trends, AI-Driven Workflows, and Smarter Data Strategies for Q2 via @sejournal, @CallRail

In the fast-paced world of PPC advertising, marketers are constantly seeking ways to streamline their workflows and improve performance.

Managing PPC campaigns efficiently requires a delicate balancing act of multiple tasks:

  • Analyzing data.
  • Optimizing bid strategies.
  • Testing creatives.
  • Reporting performance.
  • And so much more.

While AI and machine learning have been around in PPC for years, a new wave of AI tools for streamlining productivity and workflows has made its way into the PPC scene.

Whether it’s automating repetitive tasks, enhancing audience targeting, or analyzing vast datasets, AI tools are reshaping how PPC professionals work.

Who doesn’t want to save time doing repetitive, busy work tasks?

In this article, we’ll explore several unconventional ways AI tools can help PPC marketers save time, increase efficiency, and make smarter decisions.

Using AI To Automate Data Interpretation And Trend Insights

PPC campaigns can generate enormous amounts of data that need to be consistently analyzed and interpreted.

AI tools outside the standard Google and Microsoft Ads platforms can help streamline this process with tasks like:

  • Quickly summarizing key trends.
  • Looking for patterns in performance data.
  • Identifying data anomalies for further analysis.

These insights can enable marketers to move from data to action faster.

Using AI Tools For Trend Identification And Insights

If you’d rather not manually sift through reports to identify changes in performance metrics, you can feed campaign data into ChatGPT (or similar AI tools) to receive summaries that highlight performance trends.

For example, they can help identify seasonal changes in performance or pinpoint potential issues, such as a sudden dip in conversion rate.

Say you run 20 different campaigns in Google Ads and start to see a significant drop in conversion rates from the platform. It can be daunting to immediately pinpoint the cause of the issue.

By processing raw performance data from your campaigns, these AI tools can quickly provide insight not only into where the problem may lie, but also into why performance has shifted, like:

  • Ad fatigue.
  • Increased competition.
  • A shift in consumer behavior.

Using AI tools in this capacity helps marketers cut down on analysis time while helping to identify core issues faster, allowing for quicker optimization.

This automation saves hours of manual work, enabling you to focus on more strategic decision-making instead of spending time analyzing large datasets.
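
As a minimal sketch of that workflow, the snippet below aggregates an exported campaign CSV by week and asks an LLM for a trend summary. The file name, columns, model name, and prompt are placeholders, and it assumes the OpenAI Python client is installed with an API key configured.

```python
# Minimal sketch: feed exported campaign metrics to an LLM for a trend summary.
# "campaign_performance.csv" and the model name are placeholders.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("campaign_performance.csv")  # e.g., date, campaign, clicks, cost, conversions

# Keep the prompt small: aggregate weekly so the model sees trends, not raw rows.
df["date"] = pd.to_datetime(df["date"])
weekly = df.groupby([pd.Grouper(key="date", freq="W"), "campaign"])[
    ["clicks", "cost", "conversions"]
].sum().reset_index()

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a PPC analyst."},
        {"role": "user", "content": (
            "Summarize the key performance trends, any anomalies, and likely "
            "causes (e.g., ad fatigue, competition, seasonality) in this data:\n"
            + weekly.to_csv(index=False)
        )},
    ],
)
print(response.choices[0].message.content)
```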

Enhancing Competitor Analysis And Strategy Development

Keeping up with competitors is crucial in the PPC landscape, but the task at hand can be time-consuming and complex.

AI tools simplify this process by providing insights into competitors’ strategies, allowing you to stay one step ahead.

There are plenty of tools to help drive competitor insights, whether in the Google Ads platform, third-party tools, or AI tools.

If you’re looking to take the analysis a step further, you can input reports from other competitive analysis tools into ChatGPT (or a similar tool) to receive a quick summary that highlights a competitor’s recent actions.

For example, this could include information like:

  • Shifts in bidding strategies.
  • Introduction of new ad copies.
  • Keywords being targeted.

Based on this data, the AI tools can suggest ways to adjust your own campaigns or suggest counter-strategies to stay competitive.

By automating competitor analysis tasks, you can gain valuable insights faster, which allows for quicker, more informed decision-making and strategic actions.

Simplifying Multi-Account And Cross-Platform Reporting

Managing campaigns across multiple platforms – whether it’s Google Ads, Microsoft Ads, Meta, or others – means compiling huge data sets from different sources.

Trying to put together a compelling, holistic story about your marketing campaigns can take up a lot of time as you navigate from platform to platform.

This is where the power of AI tools can come in to help aggregate reports and create cohesive summaries.

Streamlining Cross-Platform Reporting

Multi-channel reporting is often a daunting task, especially when managing accounts across Google, Microsoft, and social platforms.

By inputting performance data from these platforms into ChatGPT, marketers can receive a single, unified report that summarizes key performance indicators (KPIs) across channels.

For example, say you manage several campaigns across Google Ads, Microsoft Ads, and Meta Ads.

Instead of switching between dashboards and manually pulling data, you can input the performance metrics from each platform into your AI tool of choice.

The tool can summarize the top-performing platforms, highlight underperforming campaigns, and suggest where to reallocate budgets to maximize ROI.

AI’s ability to consolidate multi-channel data helps reduce reporting time, enabling marketers to spend more time optimizing campaigns and less time on administrative tasks.
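
Here is a minimal sketch of the consolidation step itself, assuming you have CSV exports from each platform with roughly comparable columns; the file and column names are placeholders. The resulting table can then be pasted into your AI tool of choice for a narrative summary.

```python
# Minimal sketch: combine platform exports into one KPI summary.
# File names and column names are placeholders for whatever your exports contain.
import pandas as pd

files = {
    "Google Ads": "google_ads_export.csv",
    "Microsoft Ads": "microsoft_ads_export.csv",
    "Meta Ads": "meta_ads_export.csv",
}

frames = []
for platform, path in files.items():
    df = pd.read_csv(path)  # expected columns: campaign, spend, clicks, conversions
    df["platform"] = platform
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
summary = combined.groupby("platform").agg(
    spend=("spend", "sum"),
    clicks=("clicks", "sum"),
    conversions=("conversions", "sum"),
)
summary["cpa"] = (summary["spend"] / summary["conversions"]).round(2)
print(summary.sort_values("cpa"))
```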

Keyword Research And Expansion With AI

Keyword research is at the core of every PPC strategy, and expanding keyword lists can be labor-intensive.

AI tools can make the process more efficient by identifying relevant keywords, negative keywords, and keyword variations that are often missed in traditional tools.

While tools like the Google Keyword Planner are great at providing keyword recommendations, AI tools can take it a step further.

They can generate items like long-tail keyword variations and help identify opportunities for new targeting strategies.

Additionally, they can analyze an existing keyword list and suggest related keywords that reflect user intent or emerging trends.

For example, say you manage PPC campaigns for an ecommerce retailer. You input a list of current top-performing keywords with your latest KPI performance data into your AI tool of choice.

From there, the tool can generate suggestions for new long-tail keywords that may have lower volume, but higher intent to purchase.

Additionally, you can ask the tool to suggest negative keywords to eliminate irrelevant traffic, which improves both relevance and cost efficiency.

To really kick this into high gear, you can then ask the tool to structure the new keywords and negative keywords in a format you can upload directly into Google Ads Editor, saving you hours of manual work adding each one individually.
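
As an illustration, here is a minimal sketch that writes AI-suggested keywords and negatives into bulk-upload CSVs. The campaign, ad group, keywords, and column headers are all placeholders; confirm them against the import template your version of Google Ads Editor actually expects before uploading.

```python
# Minimal sketch: turn AI-suggested keywords into bulk-upload CSVs.
# The column headers below are illustrative; confirm them against the
# import template your version of Google Ads Editor actually expects.
import csv

campaign, ad_group = "Ecommerce - Running Shoes", "Long-Tail"  # placeholders

new_keywords = [
    ("best running shoes for flat feet women", "Phrase"),
    ("lightweight trail running shoes under 100", "Exact"),
]
negative_keywords = ["free", "jobs", "diy repair"]

with open("keywords_upload.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Campaign", "Ad group", "Keyword", "Match type"])
    for keyword, match_type in new_keywords:
        writer.writerow([campaign, ad_group, keyword, match_type])

with open("negative_keywords_upload.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Campaign", "Ad group", "Keyword", "Match type"])
    for keyword in negative_keywords:
        writer.writerow([campaign, ad_group, keyword, "Negative Broad"])
```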

Using AI tools beyond the ad platforms can help marketers discover new opportunities faster, ensuring more comprehensive targeting with minimal manual effort.

AI-Assisted Testing And Creative Optimization

There’s no debate that A/B testing is critical to campaign optimization, but interpreting results and deciding on the next steps is where many marketers fall short.

AI tools can streamline this process by analyzing test data and suggesting optimizations based on performance.

Say you want to test two different versions of a headline in a PPC campaign. You can upload your test performance data into an AI tool for analysis.

Not only will it summarize which headline performed better, but it can also go a step further and help explain why one headline outperformed the other.

By providing insights into which elements contributed to success, it can save you time in the long run and help keep those driving factors top of mind for the next test.
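
Before asking an AI tool why one variant won, it is worth confirming that the difference is not just noise. Here is a minimal two-proportion z-test sketch with invented click and impression counts:

```python
# Minimal sketch: check whether headline B's lift over headline A is
# statistically meaningful. The impression/click counts are made up.
from math import sqrt
from statistics import NormalDist

# (clicks, impressions) for each headline variant
a_clicks, a_impr = 310, 12_000
b_clicks, b_impr = 395, 11_800

p_a, p_b = a_clicks / a_impr, b_clicks / b_impr
p_pool = (a_clicks + b_clicks) / (a_impr + b_impr)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_impr + 1 / b_impr))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"CTR A: {p_a:.2%}, CTR B: {p_b:.2%}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
# A small p-value (commonly < 0.05) suggests the difference is unlikely
# to be noise; the "why" still needs a human (or LLM) reading of the ads.
```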

AI For PPC Budget Allocation And Forecasting

Effective budget management is essential for optimizing PPC performance.

The ad platforms are great at automating tasks like changing daily budgets based on scripts, but what about strategic budget allocation decisions?

Using AI tools to assist with budget allocation across campaigns or platforms – by forecasting potential outcomes from past performance data – can streamline the process of deciding where to invest, and when.

For example, say a retail client has an upcoming holiday sale and wants to know whether to expect a higher return than last year’s sale.

Inputting last year’s campaign performance into AI tools like ChatGPT can help analyze performance, while also taking into consideration current market trends.

The output could suggest how much of the budget should be allocated to high-performing keywords or certain product categories.

It can also provide a forecast of expected returns based on historical data, current CPC trends, and consumer behavior trends to help you make informed budget decisions ahead of time.
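
A minimal sketch of that kind of projection, using invented numbers and a deliberately simple heuristic (allocating budget in proportion to expected ROAS); this is not how the ad platforms forecast, just a rough starting point you could hand to an AI tool or refine yourself.

```python
# Minimal sketch: project this year's holiday-sale returns from last year's
# numbers and a simple CPC trend adjustment. All figures are invented.
last_year = {
    # category: (spend, revenue)
    "running shoes": (8_000, 36_000),
    "apparel": (5_000, 15_000),
    "accessories": (2_000, 4_500),
}
cpc_trend = 1.12          # assume CPCs are ~12% higher year over year
total_budget = 18_000     # this year's planned spend

# Expected ROAS per category if efficiency otherwise holds.
expected_roas = {cat: rev / spend / cpc_trend for cat, (spend, rev) in last_year.items()}

# Allocate budget proportionally to expected ROAS (one simple heuristic).
roas_sum = sum(expected_roas.values())
allocation = {cat: round(total_budget * r / roas_sum) for cat, r in expected_roas.items()}

for cat in last_year:
    print(f"{cat:>14}: expected ROAS {expected_roas[cat]:.1f}x, "
          f"suggested budget ${allocation[cat]:,}")

projected_revenue = sum(allocation[cat] * expected_roas[cat] for cat in last_year)
print(f"Projected revenue: ${projected_revenue:,.0f}")
```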

AI-driven budget forecasting helps ensure that resources are allocated to the right areas, reducing wasted spend and improving overall campaign performance.

Automating Market Trend Exploration And Forecasting

Market trends can shift quickly, and staying ahead of these changes is key to successful PPC campaigns.

AI tools can analyze search trends, consumer behavior, and historical campaign data to predict future shifts in demand and help marketers prepare.

For instance, AI tools can identify trends in consumer searches in real time, helping you adjust your campaign strategies proactively.

For example, say you manage Google Ads campaigns for a fitness brand and notice a seasonal uptick in searches for [home workout equipment].

By using AI tools to analyze Google Trends data, you can forecast whether that demand will rise or fall in the coming months, and even whether certain geographic areas are driving it.

This allows you to adjust bids based on location, increase overall budgets if necessary to help capture demand, and create relevant ad copy that speaks directly to the emerging trend.
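
For instance, here is a minimal sketch that turns a Google Trends export into a rough forecast for next month. It assumes you have trimmed the export to a simple two-column table (week and interest index) covering a few years; the file name and the naive seasonal heuristic are placeholders for whatever your AI tool or analyst would actually use.

```python
# Minimal sketch: estimate next month's demand from an exported Google Trends
# CSV, trimmed to a "Week" column and an interest column. Trends exports a
# relative interest index (0-100), not absolute search volume.
import pandas as pd

trends = pd.read_csv("home_workout_equipment_trends.csv", parse_dates=["Week"])
trends = trends.rename(columns={trends.columns[1]: "interest"}).set_index("Week")

monthly = trends["interest"].resample("MS").mean()  # monthly averages

# Naive seasonal forecast: next month's interest ~ the same calendar month
# last year, scaled by how the last three months compare to a year earlier.
yoy_factor = monthly.iloc[-3:].mean() / monthly.iloc[-15:-12].mean()
same_month_last_year = monthly.iloc[-11]
forecast = same_month_last_year * yoy_factor

print(f"Year-over-year trend factor: {yoy_factor:.2f}")
print(f"Forecast interest for next month: {forecast:.0f}")
```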

Conclusion

AI is revolutionizing PPC workflows, allowing marketers to work smarter, not harder.

Whether you’re leveraging Google Ads’ AI capabilities, like Gemini-powered conversational ad creation, or integrating third-party tools for deeper insights, AI is becoming indispensable in managing and optimizing PPC campaigns.

From automating bid management and audience targeting to optimizing ad creatives and providing actionable insights, AI offers opportunities to boost efficiency without sacrificing effectiveness.

As AI tools continue to evolve, those who embrace these technologies will find themselves better equipped to deliver superior results, whether managing in-house campaigns or serving clients.

By integrating both Google’s AI features and powerful third-party tools, you can unlock new levels of performance, save time on manual tasks, and focus on strategy and innovation.


SEO vs. Pay-per-click advertising: Which one should you choose?

SEO and PPC are two of the most important strategies for increasing your website’s visibility. While they both aim to attract more traffic, they operate differently. They also serve different purposes. Here, we’ll discuss SEO vs. Pay-per-click advertising and how to choose the best option for you.


Understanding SEO and PPC

As we all know, SEO stands for Search Engine Optimization. It consists of everything you do to get your site higher rankings in the organic search results. Those tactics include thoroughly researching which keywords to target, writing high-quality content, and making sure that your site is structurally and technically sound. The goal is to get the organic traffic you want by making your site relevant and authoritative.

Pay-per-click (PPC), on the other hand, is all about paying for ads — the sponsored listings — that appear at the top of search results. So, every time someone clicks your ad, it costs you money. This model can lead to immediate results, and it lets you target advertising based on user demographics.

An example of PPC ads vs organic results for a search term in Google

What’s the difference between SEO and PPC?

SEO and pay-per-click advertising are both popular options for getting traffic to your site. However, each option has its own advantages, depending on your goals.

Cost structure

For SEO, the costs mostly lie in the initial work and ongoing maintenance. You have to invest in creating high-quality content, optimizing your site, and reaching out to build good links and relationships. With SEO, there are no direct costs per click, but it does require consistent effort and resources to get results.

With PPC, you pay every time someone clicks your sponsored listing. To make it manageable, you set a budget; when this budget runs out, your ads will no longer be visible. PPC gives you control over budget, but costs can quickly ramp up — especially in high-demand markets or for competitive keywords. 

Time to results

We always say that SEO is a marathon and not a sprint. Building authority takes time, so it can take months to see rankings go up. But the wait is worth it, as it leads to better and more stable results in the long run.

PPC is more direct and to the point. Launch a campaign, and the visitors should come in straight away. As such, this is a great tool for time-sensitive stuff like promotions and launches or when you need instant visibility and reach. 

Sustainability and impact

SEO is the more sustainable option. With your initial work done, you can reap the rewards for a long time. Of course, there’s always more to do with your SEO tasks, but that’s normal. Building a brand is something that will pay off big time. With PPC, you get an incredible boost for a short period — the time you pay for the sponsored listings.

Targeting capabilities

SEO targets users based on content and keywords. You can tailor your content to different search intents, but the options are not as direct as with PPC. PPC offers more precise options, allowing you to show ads based on specific demographics, locations, times, and user behavior.

Flexibility and control

With SEO, you do put yourself in the hands of search engine algorithms. An algorithm update could harm your rankings and force you to reevaluate your strategy. You have control over everything on your site, but not over the search engines. PPC, though, gives you full control over your ads, which makes it easier to adapt to changes and needs.

Measurement and analytics

It’s important to measure your success. For SEO, you are looking at a longer period and need to keep track of traffic and keyword rankings. It can be difficult to get usable insights from data. With PPC, you get detailed insights that show you how your campaigns are doing. You’ll also get the tools to adjust instantly. 


SEO and PPC, while different channels that require different skills and have different goals, can really complement each other in the long term. To me, PPC is considered more of a science than the art of SEO. The great thing about PPC for SEOs is that it not only attracts quicker returns (that can also be calculated with more precision) but also provides the same accurate and actionable data for SEOs. I have always found data from PPC extremely useful in directing an SEO strategy.

Alex Moss – Principal SEO expert at Yoast


Pros and cons of SEO

Both SEO and PPC have their pros and cons. Let’s go over these.

Pros of SEO

SEO is cost-effective in the long run. Once you have a strategy and an optimized site, it can continue attracting traffic without additional costs, leading to a sustainable traffic source. 

Ranking well gives your site a sense of trust and credibility, as people trust sponsored listings less than organic search results. High rankings can boost your brand. Of course, higher rankings lead to a higher CTR, and many users simply skip ads because they don’t like them.

Because SEO improves the general user experience of your website, it becomes a better overall investment for your money. Investing in SEO can lead to higher engagement and conversion rates.

Cons of SEO

Of course, SEO isn’t a cure-all. For one, building up authority and higher rankings takes a lot of time, so it’s not the right choice if you want quick results. You must also work on your strategy, content, and site quality. The more work you put in, the better your results can be. And as search engines keep evolving, you must evolve as well.

SEO operates in a highly competitive landscape. For some markets, it’s almost impossible to break into the top ten of the results. Plus, it might take a ton of money to do that. And that’s another con for SEO: the results are uncertain due to algorithm changes, competition, and market conditions.  

Pros and cons of PPC

Pay-per-click advertising also has its own good points and bad points, as you’ll read below:

Pros of PPC

The biggest benefit of PPC is getting immediate results for your money. You can set up campaigns quickly and get results going without much hassle. You also have full control over the budget, so you only pay for what you want to pay for. 

PPC is also flexible and precise. You have a lot of control over who you target and when, leading to more relevant results. And if your strategy needs adjustments, you can update your sponsored listings quickly. Pay-per-click ad systems give you all the data you need to make the proper decisions.

Cons of PPC

One of the main drawbacks of pay-per-click is that costs can rise quickly. Another is that you’ll only get results as long as you pay — no money, no results. This makes PPC a viable option only for specific campaigns.

How well ads perform also depends on how users perceive them — ad fatigue is a thing. You must experiment with placements and formats to see what works best. Whatever you try, you should adhere to the rules of the platforms on which you’re running your ads.

Conclusion: SEO vs. pay-per-click

The choice between SEO and PPC depends on your needs, strategy, and timeline. SEO is amazing for long-term results, while PPC can quickly produce results. Most businesses will probably use a combination of both. You can use the strengths of both strategic tools in your toolset to get the results your business is looking for.


Where Are The Missing Data Holes In GA4 That Brands Need? via @sejournal, @gregjarboe

As SEO professionals, we’re data-driven. So, it’s ironic that we need to ask a counterintuitive question: “Where are the missing bullet holes in Google Analytics 4 (GA4)?”

Most of us trust the event-based data that GA4 collects. But we should use other tools and techniques to independently verify our analysis and interpretation of this data.

Why?

I just looked at data in the GA4 demo account of the Google Merchandise Store, and 46,811 of the 68,976 total users over the last 28 days were acquired from the direct channel.

This means 67.9% of users arrived at the site “via a saved link or by entering your URL.”

Screenshot from Google Analytics, January 2025

If you think the Google Merchandise Store’s data is an anomaly because it’s from the GA4 demo account, then check your own data.

I did, and 57.6% of my total users arrived through the direct channel. So, your mileage may vary, but there are probably more direct-channel users than you can shake a stick at.
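
If you want to check your own property programmatically rather than in the GA4 interface, a minimal sketch using the GA4 Data API might look like the following. It assumes the google-analytics-data Python package is installed and credentials are configured; the property ID is a placeholder.

```python
# Minimal sketch: pull the last 28 days of acquired users by default channel
# group from the GA4 Data API. Assumes the google-analytics-data package and
# application credentials are set up; the property ID is a placeholder.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/123456789",  # your GA4 property ID
    dimensions=[Dimension(name="firstUserDefaultChannelGroup")],
    metrics=[Metric(name="totalUsers")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="yesterday")],
)
response = client.run_report(request)

total = sum(int(row.metric_values[0].value) for row in response.rows)
for row in response.rows:
    users = int(row.metric_values[0].value)
    print(f"{row.dimension_values[0].value:>20}: {users:>8} ({users / total:.1%})")
```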

More importantly, the Google Merchandise Store’s business goal is to sell a variety of Google merchandise, including apparel, accessories, lifestyle products, stationery, and collectibles.

How would you analyze and interpret GA4’s data to determine which marketing efforts were effective?

You could use GA4 to understand how users progress through the online shopping cart. If you notice that users have trouble with a particular step, then you could use conversion rate optimization (CRO) to make changes on the store’s website to resolve the problem.

You would analyze and interpret customer engagement data from the middle and lower parts of the so-called sales funnel.

If I were the owner of a brick-and-mortar store, I’d realize that I’m focusing all my attention on which aisles people walk down and which items they bring to the cash register.

But I still don’t have a clue where they heard about my shop before they walked through the door.

In other words, GA4 gives us less than a third of the data we need to know about user acquisition: The initial stage of building business awareness and acquiring user interest.

Somehow, we’ve missed what GA4 can’t – or doesn’t – tell us about the Zero Moment of Truth (ZMOT): the moment in the purchase process when the consumer or business buyer researches a product or service prior to visiting your website.

The Missing Bullet Holes

Why haven’t we spotted this misalignment before? Well, let me share a story.

My father was a sergeant in the United States Army Air Corps (USAAC) during World War II.

When I started conducting market research in the mid-1980s – when he was the director of marketing at Oldsmobile, and I was the director of corporate communications at Lotus Development Corporation – he told me a story that has since been retold in “Abraham Wald and the Missing Bullet Holes,” which is an excerpt from How Not To Be Wrong by Jordan Ellenberg.

During World War II, officers in the USAAC asked Abraham Wald, one of the smartest statisticians in the Statistical Research Group (SRG), to analyze some classified data.

When American bombers came back from missions over Europe, they were covered in bullet holes.

“But the damage wasn’t uniformly distributed across the aircraft,” Ellenberg notes. “There were more bullet holes in the fuselage, not so many in the engines.”

Wald recognized that the planes that came back were not a random sample of all the planes that had been sent on bombing missions, and he also realized the damage should have been spread equally among all the bombers.

So, he asked, “Where are the missing holes?” Ellenberg explains, “The reason planes were coming back with fewer hits to the engine is that planes that got hit in the engine weren’t coming back.”

The Missing Holes In User Acquisition

Digital marketers are in an analogous situation. GA4 provides us with so much event-based data that we’ve failed to spot the missing holes in user acquisition.

So, now that we realize that we don’t have a clue about where the lion’s share of our audience discovered our brand or product before visiting our website, what should we do?

We should conduct some audience research that can tell us:

  • Who are they? (Demographics: age, gender, location, job, and income).
  • What do they do? (Behavior: how they shop, what they search for online).
  • Where do they hang out? (Platforms: social media, websites, communities).
  • What matters to them? (Needs and Interests: their problems, desires, and what they talk about).

Are there any audience research tools that can help us? Yes, they include:

  • SparkToro or Audiense: For demographic and platform data.
  • Brandwatch, HootSuite, or Sprout Social: For social listening.
  • Ahrefs, Moz, or SpyFu: For keyword research.
  • Google Trends or Exploding Topics: For detecting internet search trends.

How Do You Spot The Missing Holes?

If you’re in the initial stage of building business awareness and acquiring user interest in other countries, then how do you spot the missing holes?

For over 10 years, I used the now-sunset Google Surveys to answer questions like that. You can still use Google Forms or SurveyMonkey.

I asked survey expert and CEO of Growth Survey Systems Nathaniel Laban if he would provide a sample question for such a survey, and here’s what he sent me via email:

For a consumer or B2B study, it might look like this:

1. Where do you get news and information about (brand/product)? (Select all that apply. Multiple response.)

  • From friends, family, and colleagues.
  • From an expert or enthusiast who demonstrably knows the topic well.
  • Organic search.
  • Blogs, news sites.
  • Paid search.
  • Email.
  • Organic social.
  • Organic shopping.
  • Organic video.
  • Other (Specify):

Laban added:

“Communications and marketing channels should always be investigated for one’s target audience.

You need to meet people where they are today to be successful in communications and marketing campaigns. Test your assumptions about where your audience is and back it up with statistically representative data.

Trust your math, not your gut!”

What Can You Expect To Discover?

Now that you know how to spot the holes at the top of the funnel where GA4 can’t – or doesn’t – tell us what we need to know about ZMOT, what can you expect to discover?

GA4 provides a way to measure engaged-view key events, which indicate that someone watched a YouTube video for at least 10 seconds and then triggered a key event on your website or app within three days of viewing the video.

Engaged-view key events are a more accurate way to measure the performance of your video ads. They recognize the fact that users often don’t act immediately after seeing an ad, but rather after they’ve finished watching a video.

This also explains why 70% of YouTube viewers say they’ve purchased a brand after discovering it on the platform. It indicates that YouTube is a highly effective medium for brand discovery and purchase intent.

But to measure engaged-view key events, you need to link your Google Ads account to allow data to flow between Ads and GA4.

Unfortunately, there isn’t a similar way to measure engaged-view key events for other default channels in GA4 like organic video (e.g., YouTube or TikTok), organic social (e.g., Facebook or LinkedIn), or referral (e.g., blogs or news sites).

Users actively seek information about products and services by watching or reading this content, often leading to buying decisions based on what they’ve seen and learned in these channels.

But if you conduct an information sources survey and invest in the right channels and sources of influence, then you shouldn’t be shocked to find that you tend to generate more traffic, leads, and sales, too.

Conducting Brand Lift Surveys

What if your company or clients are in the automotive, consumer packaged goods, or retail industries, and your business objective is to raise brand awareness? How do you measure that?

As I mentioned previously, you can conduct Brand Lift surveys.

Either periodically or before and after major campaigns, you can survey your audience and ask:

  • Standard Brand Awareness: Have you heard of (brand/product/message)?
  • Unaided Brand Awareness: Which of the following (brand/product category) have you heard of? (Tick all that apply.)
  • Top-of-Mind Awareness: Which of the following (brand/products) comes to mind first when you think of (statement)?
  • Standard Favorability: What’s your opinion of (brand/product)?
  • Familiarity: How familiar are you with (brand/product name)?
  • Intent: Will you buy (brand) the next time you shop for (category)?
  • Action Intent: How likely are you to purchase (brand)?
  • Recommendation: Will you recommend (brand/product) to a friend?
  • Consideration: How likely are you to (consider) (brand/product) the next time you want to (shop for) (category)?
  • Preference: Among the following (brands), which do you prefer most?
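
Once you have pre- and post-campaign waves of a question like the first one, the lift calculation itself is straightforward. A minimal sketch with invented counts:

```python
# Minimal sketch: compute absolute and relative brand lift from pre- and
# post-campaign survey waves. The counts below are invented for illustration.
pre_aware, pre_n = 212, 1_000    # respondents aware of the brand before the campaign
post_aware, post_n = 287, 1_000  # respondents aware after the campaign

pre_rate = pre_aware / pre_n
post_rate = post_aware / post_n
absolute_lift = post_rate - pre_rate
relative_lift = absolute_lift / pre_rate

print(f"Awareness before: {pre_rate:.1%}, after: {post_rate:.1%}")
print(f"Absolute lift: {absolute_lift * 100:.1f} percentage points")
print(f"Relative lift: {relative_lift:.0%}")
```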

In other words, old-school market research can measure Brand Lift, which GA4’s event-based data can’t – even if it’s supplemented with audience research data.

The Lesson We Can Learn From The Missing Bullet Holes

Digital marketers who don’t conduct market research may know what users do when they reach the middle and lower funnel, but they haven’t a clue about why users in the upper funnel aren’t aware of their brand yet or where they can reach them.

That’s the lesson we can learn from “Abraham Wald and the Missing Bullet Holes.” It’s a lesson that my father learned more than 80 years ago, and he shared it with me about 40 years later. Now, I’m sharing it with you.

In short, trust GA4’s data, but verify your analysis and interpretation of it.


Mastering SERP Analysis: A Step-By-Step Guide To Understanding Search Engine Results Pages via @sejournal, @AdamHeitzman

Understanding search engine results pages (SERPs) is critical for anyone serious about increasing their website’s visibility.

Search engines use SERPs to display results for user queries, and the primary goal for SERP analysis is understanding why certain pages earn top rankings and what elements contribute to their success.

Analyzing these pages can unlock valuable insights into ranking factors, search intent, and what content types perform best.

Conducting SERP analysis helps you develop content strategies that align with search engine preferences and user expectations.

In this comprehensive guide, we’ll explain the fundamentals of SERP analysis, why it matters, and how you can master it to improve your SEO strategy.

Understanding SERP Features

Today’s search results pages are more complex, featuring many elements beyond the traditional organic blue links. Here are the key SERP features you need to know:

Featured Snippets

Position zero results that provide immediate answers to queries, typically in the form of paragraphs, lists, or tables.

These snippets are extracted directly from top-ranking pages and appear above organic results.

Screenshot from search for [How does photosynthesis work in desert plants], Google, January 2025

AI Overview/Search Generative Experience (SGE)

Google’s AI-generated summaries synthesize information from multiple sources to provide comprehensive answers.

These appear at the top of results and often include citation links to source material.

Screenshot from search for [ai overviews], Google, January 2025

Rich Snippets

Enhanced search listings that display additional information through structured data, such as:

  • Star ratings.
  • Product prices.
  • Recipe details.
  • Event information.
  • Review counts.
  • Author information.
Screenshot from search for [chocolate chip cookie recipe], Google, January 2025
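
For context, the structured data behind a rich snippet is typically JSON-LD embedded in the page. Here is a minimal sketch that generates recipe markup with an aggregate rating; the values are invented, and real markup should be validated (for example, with Google's Rich Results Test) before publishing.

```python
# Minimal sketch: generate the JSON-LD structured data behind a recipe
# rich snippet (star ratings, cook time). Values are invented; validate
# real markup before publishing.
import json

recipe_markup = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Chocolate Chip Cookies",
    "author": {"@type": "Person", "name": "Example Baker"},
    "totalTime": "PT30M",
    "recipeYield": "24 cookies",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "ratingCount": "312",
    },
}

# Embed this inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(recipe_markup, indent=2))
```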

Knowledge Panels

These are information boxes appearing on the right side of desktop searches, displaying key facts about entities like:

  • Businesses.
  • People.
  • Places.
  • Organizations.
  • Products.
Screenshot from search for [HigherVisibility], Google, January 2025

People Also Ask (PAA) Boxes

Expandable sections showing related questions and answers, helping users explore topics in greater depth.

Screenshot from search for [how do solar panels work], Google, January 2025

Local Packs

Groups of three local business listings with maps, particularly prominent for location-based queries.

Screenshot from search for [pizza near me], Google, January 2025

Shopping/Product Features

  • Product Carousels: Horizontal scrolling product listings with images and prices.
  • Shopping Knowledge Panels: Detailed product information with purchasing options.
  • Merchant Listings: Comparison shopping results from multiple retailers.
Screenshot from search for [wireless headphones], Google, January 2025

Visual Features

  • Image Packs: Grid layouts of relevant images.
  • Video Carousels: Scrollable video results, often from YouTube.
  • Visual Stories: Web stories in a mobile-friendly format.

News And Editorial Features

  • Top Stories Boxes: Recent news articles.
  • Publisher Carousel: News from specific publications.
  • Perspectives Carousel: Opinion pieces and editorials.
Screenshot from Google News, January 2025

Why Does SERP Analysis Matter?

SERP analysis is a cornerstone of any SEO strategy because it provides actionable insights about your competition, audience preferences, and search engine ranking factors.

Here’s why it’s so important:

1. Understanding Search Intent

Search intent is the motivation behind a user’s query.

For example, a user might want to learn how to complete a specific task, compare different products or services, or make a purchase.

Analyzing the top-ranking pages for a keyword is the best way to infer the search intent behind that term. This is because search engine algorithms are fine-tuned to surface content that best matches what users expect to see.

So, if most of the results for a given keyword are tutorial-based articles, it’s safe to assume that users searching for that keyword are looking for step-by-step instructions or educational content.

Meanwhile, if the results consist primarily of product pages or reviews, the intent is likely transactional, with users looking to make a purchase or compare options before buying.

Further reading: How People Search: Understanding User Intent

2. Uncovering Competitor Strategies

Studying top results helps you identify what your competitors are doing right.

This includes the depth and structure of their content, their use of multimedia formats like videos or infographics, keyword optimization tactics, and the strength of their backlink profiles.

By closely examining these factors, you can uncover patterns in the strategies across competitors that drive their success.

What’s more, SERP analysis helps you pinpoint gaps in your competitors’ strategies – such as overlooked topics, under-optimized keywords, or weak content in high-ranking positions – giving you opportunities to create more comprehensive, engaging, and authoritative content that outperforms them.

Further reading: SEO Competitive Analysis: The Definitive Guide

3. Identifying Keyword Opportunities

Not all keywords are equally competitive.

SERP analysis can help you find low-hanging fruit – keywords with manageable competition that still attract significant search volume.

By identifying these overlooked or underserved keywords, you can create targeted content to capture untapped traffic and build authority.

These opportunities are especially valuable for smaller websites or those just beginning to build domain authority.

They allow you to focus your efforts on achievable wins while steadily growing your traffic and credibility.

Further reading: Keyword Research: An In-Depth Beginner’s Guide

4. Optimizing For SERP Features

Appearing in SERP features (as we discussed earlier) can significantly increase your visibility and click-through rates.

Even if you don’t achieve the highest rankings, your site can still claim some valuable SERP real estate and capture user attention.

SERP analysis helps you identify which features appear for your target keywords and what type of content Google pulls into them.

For example, featured snippets often prioritize concise, well-structured answers, while PAA boxes highlight responses to commonly searched follow-up questions.

By tailoring your content to match the requirements of these features – whether it’s using clear formatting, answering common questions, or implementing structured data – you can boost your chances of appearing in these prominent positions, ultimately driving more traffic to your site.

How To Conduct SERP Analysis In 4 Steps

1. Identify Your Target Keywords

Start by choosing the keywords you want to target.

The goal here isn’t just to pick any search terms that are relevant to your business.

Remember, not all keywords offer the same value – some are highly competitive, while others may not attract enough search traffic to be worthwhile.

Instead, focus on keywords that are:

Aligned With Your Audience’s Interests

Look for terms that reflect the type of content your target audience will likely find valuable, whether it’s solutions to their problems, product recommendations, or in-depth information on a particular topic.

Promote Your Business Goals

Focus on terms that match your immediate business objectives, such as building brand awareness, generating leads, or directing traffic to specific product pages.

Not Too Competitive

Avoid going after highly competitive keywords dominated by well-established brands unless you have the resources to compete.

Instead, look for long-tail keywords or niche terms that give you a better chance at standing out.

Attract Search Volume

As a rule, keywords with high search volumes tend to be the hardest to rank for.

That said, you don’t need to aim for the highest-volume keywords to see results.

Instead, focus on keywords with moderate search volume that are still relevant to your audience and achievable for your domain authority.

2. Analyze The SERP Landscape

When examining search results, consider:

Desktop Vs. Mobile Differences:

  • Feature placement variations.
  • Mobile-specific elements like scrolling carousels.
  • Different click behaviors and user patterns.

Location And Personalization Impact:

  • How results vary by geographic location.
  • Personalized elements based on search history.
  • Language and regional preferences.

SERP Feature Opportunities:

  • Which features appear for your target keywords.
  • Requirements for earning specific SERP features.
  • Competition level for each feature type.

3. Evaluate Top-Ranking Pages

Next, you’ll need to examine the top-performing content in a little more depth.

The goal is to figure out what makes these pages rank so highly so you can reverse-engineer their success and apply similar strategies to your own content.

Here are some things to consider:

  • Content Quality: Evaluate the depth, relevance, and clarity of the content. Is it comprehensive, engaging, and well-structured? Does it fully address user intent, or are there areas where it falls short?
  • SEO Best Practices: Check title tags, meta descriptions, and header structures. Pay attention to how keywords are incorporated naturally throughout the page.
  • Multimedia Usage: Notice if the pages include videos, images, charts, or infographics. These elements enhance the user experience and often signal higher-quality content to search engines.
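
To put rough numbers on that comparison, here is a minimal sketch that fetches a handful of top-ranking URLs (gathered however you like; the ones below are placeholders) and counts words and subheadings on each page.

```python
# Minimal sketch: pull a few top-ranking pages and compare rough word counts,
# subheading counts, and titles. The URLs are placeholders.
import requests
from bs4 import BeautifulSoup

top_urls = [
    "https://example.com/guide-to-topic",
    "https://example.org/topic-best-practices",
]

for url in top_urls:
    html = requests.get(url, timeout=10, headers={"User-Agent": "serp-analysis-sketch"}).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else "(no title)"
    word_count = len(soup.get_text(" ", strip=True).split())
    headings = len(soup.find_all(["h2", "h3"]))
    print(f"{word_count:>6} words, {headings:>2} subheadings  |  {title}")
```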

So, if you find that the top pages for your keyword average 2,000+ words, cover multiple subtopics, and include custom visuals and quotes from industry experts, creating a 500-word blog post probably won’t cut it.

To compete, you’ll need to create a more detailed, engaging resource that provides value users can’t get elsewhere.

This leads us to the final step.

4. Look For Content Gaps And Opportunities

Here, the goal is to find opportunities to differentiate yourself by looking at where existing top-ranking content falls short.

Ask yourself:

  • Are there questions users might have that the current results don’t fully answer?
  • Could you provide more up-to-date statistics, original research, or unique case studies?
  • Are there related keywords or subtopics that competitors overlook?

For example, if top-ranking pages lack practical examples, recent data, exclusive quotes from industry leaders, or high-quality visuals, incorporating these elements will help give you an edge over your competitors.

This step is all about going above and beyond the quality of existing content. By filling these gaps, you’ll provide a more valuable reading experience for users.

Final Thoughts

SERP analysis has evolved beyond simply studying organic rankings. Success requires understanding the full spectrum of SERP features and how they interact with user intent and behavior patterns.

By implementing the strategies outlined in this guide and staying current with new SERP features as they emerge, you’ll be better positioned to capture valuable SERP real estate and drive meaningful traffic to your site.

Remember to regularly review and update your SERP analysis approach as search engines continue to evolve and introduce new features that can impact your visibility and performance.


Three things to know as the dust settles from DeepSeek

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The launch of a single new AI model does not normally cause much of a stir outside tech circles, nor does it typically spook investors enough to wipe out $1 trillion in the stock market. Now, a couple of weeks since DeepSeek’s big moment, the dust has settled a bit. The news cycle has moved on to calmer things, like the dismantling of long-standing US federal programs, the purging of research and data sets to comply with recent executive orders, and the possible fallouts from President Trump’s new tariffs on Canada, Mexico, and China.

Within AI, though, what impact is DeepSeek likely to have in the longer term? Here are three seeds DeepSeek has planted that will grow even as the initial hype fades.

First, it’s forcing a debate about how much energy AI models should be allowed to use up in pursuit of better answers. 

You may have heard (including from me) that DeepSeek is energy efficient. That’s true for its training phase, but for inference, which is when you actually ask the model something and it produces an answer, it’s complicated. It uses a chain-of-thought technique, which breaks down complex questions—like whether it’s ever okay to lie to protect someone’s feelings—into chunks, and then logically answers each one. The method allows models like DeepSeek to do better at math, logic, coding, and more.

The problem, at least to some, is that this way of “thinking” uses up a lot more electricity than the AI we’ve been used to. Though AI is responsible for a small slice of total global emissions right now, there is increasing political support to radically increase the amount of energy going toward AI. Whether or not the energy intensity of chain-of-thought models is worth it, of course, depends on what we’re using the AI for. Scientific research to cure the world’s worst diseases seems worthy. Generating AI slop? Less so. 

Some experts worry that the impressiveness of DeepSeek will lead companies to incorporate it into lots of apps and devices, and that users will ping it for scenarios that don’t call for it. (Asking DeepSeek to explain Einstein’s theory of relativity is a waste, for example, since it doesn’t require logical reasoning steps, and any typical AI chat model can do it with less time and energy.) Read more from me here

Second, DeepSeek made some creative advancements in how it trains, and other companies are likely to follow its lead. 

Advanced AI models don’t just learn from lots of text, images, and video. They rely heavily on humans to clean that data, annotate it, and help the AI pick better responses, often for paltry wages.

One way human workers are involved is through a technique called reinforcement learning with human feedback. The model generates an answer, human evaluators score that answer, and those scores are used to improve the model. OpenAI pioneered this technique, though it’s now used widely by the industry. 
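In rough terms, one round of that loop looks like the sketch below. Every name in it (`model.generate`, `collect_human_scores`, `update_policy`) is a hypothetical placeholder meant to show the shape of the process, not any lab’s actual training code.

```python
# A deliberately simplified sketch of one reinforcement-learning-with-human-feedback
# round. Every name here is a hypothetical placeholder, not a real API.

def rlhf_round(model, prompts, collect_human_scores, update_policy):
    # 1. The model proposes an answer for each prompt.
    answers = [model.generate(p) for p in prompts]
    # 2. Human evaluators score each (prompt, answer) pair.
    scores = collect_human_scores(prompts, answers)
    # 3. The scores are used to nudge the model toward higher-rated answers.
    update_policy(model, prompts, answers, scores)
    return model
```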

As my colleague Will Douglas Heaven reports, DeepSeek did something different: It figured out a way to automate this process of scoring and reinforcement learning. “Skipping or cutting down on human feedback—that’s a big thing,” Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel, told him. “You’re almost completely training models without humans needing to do the labor.” 

It works particularly well for subjects like math and coding, but not so well for others, so human workers are still relied upon. Still, DeepSeek then went one step further and used techniques reminiscent of how Google DeepMind trained its AI model back in 2016 to excel at the game Go, essentially having it map out possible moves and evaluate their outcomes. These steps forward, especially since they are outlined broadly in DeepSeek’s open-source documentation, are sure to be followed by other companies. Read more from Will Douglas Heaven here.
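The reason verifiable subjects are the easy case is that the human scorer in the loop described above can be swapped for a program that simply checks the answer. A toy version of that kind of automated reward might look like this; it’s illustrative only, not DeepSeek’s actual reward function.

```python
# Toy automated reward for a verifiable task: no human scorer needed.
# Purely illustrative; real reward designs are more involved.

def math_reward(model_answer: str, expected_answer: str) -> float:
    # Full reward for a matching final answer, nothing otherwise.
    return 1.0 if model_answer.strip() == expected_answer.strip() else 0.0
```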

Third, its success will fuel a key debate: Can you simultaneously push for AI research to be open for all to see and for US competitiveness against China?

Long before DeepSeek released its model for free, certain AI companies were arguing that the industry needs to be an open book. If researchers subscribed to certain open-source principles and showed their work, they argued, the global race to develop superintelligent AI could be treated like a scientific effort for public good, and the power of any one actor would be checked by other participants.

It’s a nice idea. Meta has largely spoken in support of that vision, and venture capitalist Marc Andreessen has said that open-source approaches can be more effective at keeping AI safe than government regulation. OpenAI has been on the opposite side of that argument, keeping its models closed off on the grounds that it can help keep them out of the hands of bad actors. 

DeepSeek has made those narratives a bit messier. “We have been on the wrong side of history here and need to figure out a different open-source strategy,” OpenAI’s Sam Altman said in a Reddit AMA on Friday, which is surprising given OpenAI’s past stance. Others, including President Trump, doubled down on the need to make the US more competitive on AI, seeing DeepSeek’s success as a wake-up call. Dario Amodei, a founder of Anthropic, said it’s a reminder that the US needs to tightly control which types of advanced chips make their way to China in the coming years, and some lawmakers are pushing the same point. 

The coming months, and future launches from DeepSeek and others, will stress-test every single one of these arguments. 


Now read the rest of The Algorithm

Deeper Learning

OpenAI launches a research tool

On Sunday, OpenAI launched a tool called Deep Research. You can give it a complex question to look into, and it will spend up to 30 minutes reading sources, compiling information, and writing a report for you. It’s brand new, and we haven’t tested the quality of its outputs yet. Since its computations take so much time (and therefore energy), right now it’s available only to users on OpenAI’s paid Pro tier ($200 per month), and the number of queries they can make each month is capped.

Why it matters: AI companies have been competing to build useful “agents” that can do things on your behalf. On January 23, OpenAI launched an agent called Operator that could use your computer for you to do things like book restaurants or check out flight options. The new research tool signals that OpenAI is not just trying to make these mundane online tasks slightly easier; it wants to position AI as able to handle professional research tasks. It claims that Deep Research “accomplishes in tens of minutes what would take a human many hours.” Time will tell if users will find it worth the high costs and the risk of including wrong information. Read more from Rhiannon Williams.

Bits and Bytes

Déjà vu: Elon Musk takes his Twitter takeover tactics to Washington

Federal agencies have offered exits to millions of employees and tested the prowess of engineers—just like when Elon Musk bought Twitter. The similarities have been uncanny. (The New York Times)

AI’s use in art and movies gets a boost from the Copyright Office

The US Copyright Office finds that art produced with the help of AI should be eligible for copyright protection under existing law in most cases, but wholly AI-generated works probably are not. What will that mean? (The Washington Post)

OpenAI releases its new o3-mini reasoning model for free

OpenAI just released a reasoning model that’s faster, cheaper, and more accurate than its predecessor. (MIT Technology Review)

Anthropic has a new way to protect large language models against jailbreaks

This line of defense could be the strongest yet. But no shield is perfect. (MIT Technology Review)

How the Rubin Observatory will help us understand dark matter and dark energy

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

We can put a good figure on how much we know about the universe: 5%. That’s how much of what’s floating about in the cosmos is ordinary matter—planets and stars and galaxies and the dust and gas between them. The other 95% is dark matter and dark energy, two mysterious entities aptly named for our inability to shed light on their true nature. 

Cosmologists have cast dark matter as the hidden glue binding galaxies together. Dark energy plays an opposite role, ripping the fabric of space apart. Neither emits, absorbs, or reflects light, rendering them effectively invisible. So rather than directly observing either of them, astronomers must carefully trace the imprint they leave behind. 

Previous work has begun pulling apart these dueling forces, but dark matter and dark energy remain shrouded in a blanket of questions—critically, what exactly are they?

Enter the Vera C. Rubin Observatory, one of our 10 breakthrough technologies for 2025. Boasting the largest digital camera ever created, Rubin is expected to study the cosmos in the highest resolution yet once it begins observations later this year. And with a better window on the cosmic battle between dark matter and dark energy, Rubin might narrow down existing theories on what they are made of. Here’s a look at how.

Untangling dark matter’s web

In the 1930s, the Swiss astronomer Fritz Zwicky proposed the existence of an unseen mass he called dunkle Materie—in English, dark matter—after studying a group of galaxies called the Coma Cluster. Zwicky found that the galaxies were traveling too quickly to be contained by their joint gravity and decided there must be a missing, unobservable mass holding the cluster together.

Zwicky’s theory was initially met with much skepticism. But in the 1970s an American astronomer, Vera Rubin, obtained evidence that significantly strengthened the idea. Rubin studied the rotation rates of 60 individual galaxies and found that if a galaxy had only the mass we’re able to observe, that wouldn’t be enough to contain its structure; its spinning motion would send it ripping apart and sailing into space. 

Rubin’s results helped sell the idea of dark matter to the scientific community, since an unseen force seemed to be the only explanation for these spiraling galaxies’ breakneck spin speeds. “It wasn’t necessarily a smoking-gun discovery,” says Marc Kamionkowski, a theoretical physicist at Johns Hopkins University. “But she saw a need for dark matter. And other people began seeing it too.”

Evidence for dark matter only grew stronger in the ensuing decades. But sorting out what might be behind its effects proved tricky. Various subatomic particles were proposed. Some scientists posited that the phenomena supposedly generated by dark matter could also be explained by modifications to our theory of gravity. But so far the hunt, which has employed telescopes, particle colliders, and underground detectors, has failed to identify the culprit. 

The Rubin observatory’s main tool for investigating dark matter will be gravitational lensing, an observational technique that’s been used since the late ’70s. As light from distant galaxies travels to Earth, intervening dark matter distorts its image—like a cosmic magnifying glass. By measuring how the light is bent, astronomers can reverse-engineer a map of dark matter’s distribution. 
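For a rough feel for the physics involved (this is the textbook point-mass result, not a description of Rubin’s analysis pipeline), the angle by which a light ray is bent as it passes a mass M at a closest-approach distance b is

\hat{\alpha} = \frac{4GM}{c^{2}\,b},

where G is the gravitational constant and c is the speed of light. Weak-lensing surveys measure the tiny, correlated distortions this bending imprints on galaxy shapes and work backwards to the intervening mass, most of which turns out to be dark.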

Other observatories, like the Hubble Space Telescope and the James Webb Space Telescope, have already begun stitching together this map from their images of galaxies. But Rubin plans to do so with exceptional precision and scale, analyzing the shapes of billions of galaxies rather than the hundreds of millions that current telescopes observe, according to Andrés Alejandro Plazas Malagón, Rubin operations scientist at SLAC National Laboratory. “We’re going to have the widest galaxy survey so far,” Plazas Malagón says.

Capturing the cosmos in such high definition requires Rubin’s 3.2-billion-pixel LSST Camera. The camera boasts the largest focal plane ever built for astronomy, granting it access to large patches of the sky. 

The telescope is also designed to reorient its gaze every 34 seconds, meaning astronomers will be able to scan the entire sky every three nights. The LSST will revisit each galaxy about 800 times throughout its tenure, says Steven Ritz, a Rubin project scientist at the University of California, Santa Cruz. The repeat exposures will let Rubin team members more precisely measure how the galaxies are distorted, refining their map of dark matter’s web. “We’re going to see these galaxies deeply and frequently,” Ritz says. “That’s the power of Rubin: the sheer grasp of being able to see the universe in detail and on repeat.”
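Those figures hang together if you assume the roughly decade-long survey the LSST is planned around (a duration not stated above):

\frac{800\ \text{visits}}{10\ \text{years}\times 365\ \text{nights}} \approx 0.22\ \text{visits per night} \approx \text{one visit every 4 to 5 nights},

which is consistent with sweeping the visible sky every three nights while cycling through filters and losing some time to weather.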

The ultimate goal is to overlay this map on different models of dark matter and examine the results. The leading idea, the cold dark matter model, suggests that dark matter moves slowly compared to the speed of light and interacts with ordinary matter only through gravity. Other models suggest different behavior. Each comes with its own picture of how dark matter should clump in halos surrounding galaxies. By plotting its chart of dark matter against what those models predict, Rubin might exclude some theories and favor others. 

A cosmic tug of war

If dark matter is one pole of a magnet, pulling matter together, then dark energy is the opposite pole, pushing it apart. “You can think of it as a cosmic tug of war,” Plazas Malagón says.

Dark energy was discovered in the late 1990s, when astronomers found that the universe was not only expanding, but doing so at an accelerating rate, with galaxies moving away from one another at higher and higher speeds. 

“The expectation was that the relative velocity between any two galaxies should have been decreasing,” Kamionkowski says. “This cosmological expansion requires something that acts like antigravity.” Astronomers quickly decided there must be another unseen factor inflating the fabric of space and pegged it as dark matter’s cosmic foil. 

So far, dark energy has been observed primarily through Type Ia supernovas, a special breed of explosion that occurs when a white dwarf star accumulates too much mass. Because these supernovas all tend to have the same peak in luminosity, astronomers can gauge how far away they are by measuring how bright they appear from Earth. Paired with a measure of how fast they are moving, this data clues astronomers in on the universe’s expansion rate. 
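The “standard candle” logic can be written down compactly. If every Type Ia supernova peaks at roughly the same absolute magnitude M, then its observed apparent magnitude m pins down its distance d through the standard distance-modulus relation (a textbook formula, not something specific to Rubin):

m - M = 5\,\log_{10}\!\left(\frac{d}{10\ \text{pc}}\right).

Comparing those distances with how quickly the supernovas’ host galaxies are receding, measured from their redshifts, is what revealed in the late 1990s that the expansion is speeding up.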

Rubin will continue studying dark energy with high-resolution glimpses of Type Ia supernovas. But it also plans to retell dark energy’s cosmic history through gravitational lensing. Because light doesn’t travel instantaneously, when we peer into distant galaxies, we’re really looking at relics from millions to billions of years ago—however long it takes for their light to make the lengthy trek to Earth. Astronomers can effectively use Rubin as a makeshift time machine to see how dark energy has carved out the shape of the universe. 

“These are the types of questions that we want to ask: Is dark energy a constant? If not, is it evolving with time? How is it changing the distribution of dark matter in the universe?” Plazas Malagón says.

If dark energy was weaker in the past, astronomers expect to see galaxies grouped even more densely into galaxy clusters. “It’s like urban sprawl—these huge conglomerates of matter,” Ritz says. Meanwhile, if dark energy was stronger, it would have pushed galaxies away from one another, creating a more “rural” landscape. 

Researchers will be able to use Rubin’s maps of dark matter and the 3D distribution of galaxies to plot out how the structure of the universe changed over time, unveiling the role of dark energy and, they hope, helping scientists evaluate the different theories to account for its behavior. 

Of course, Rubin has a lengthier list of goals to check off. Some top items entail tracing the structure of the Milky Way, cataloguing cosmic explosions, and observing asteroids and comets. But since the observatory was first conceptualized in the early ’90s, its core goal has been to explore this hidden branch of the universe. After all, before a 2019 act of Congress dedicated the observatory to Vera Rubin, it was simply called the Dark Matter Telescope. 

Rubin isn’t alone in the hunt, though. In 2023, the European Space Agency launched the Euclid telescope into space to study how dark matter and dark energy have shaped the structure of the cosmos. And NASA’s Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, has similar plans to measure the universe’s expansion rate and chart large-scale distributions of dark matter. Both also aim to tackle that looming question: What makes up this invisible empire?

Rubin will test its systems throughout most of 2025 and plans to begin the LSST survey late this year or in early 2026. Twelve to 14 months later, the team expects to reveal its first data set. Then we might finally begin to know exactly how Rubin will light up the dark universe.