Integrating security from code to cloud

The Human Genome Project, SpaceX’s rocket technology, and Tesla’s Autopilot system may seem worlds apart in form and function, but they all share a common characteristic: the use of open-source software (OSS) to drive innovation.

Offering publicly accessible code that can be viewed, modified, and distributed freely, OSS boosts developer productivity and creates a collaborative space for groundbreaking advancements.

“Open source is critical,” says David Harmon, director of software engineering for AMD. “It provides an environment of collaboration and technical advancements. Savvy users can look at the code themselves; they can evaluate it; they can review it and know that the code that they’re getting is legit and functional for what they’re trying to do.”

But OSS can also compromise an organization’s security posture by introducing hidden vulnerabilities that fly under the radar of busy IT teams, especially as cyberattacks targeting open source are on the rise. OSS may contain weaknesses, for example, that can be exploited to gain unauthorized access to confidential systems or networks. Bad actors can even intentionally introduce “backdoors” into OSS, hidden entry points that can later be exploited.

“Open source is an enabler to productivity and collaboration, but it also presents security challenges,” says Vlad Korsunsky, corporate vice president of cloud and enterprise security for Microsoft. Part of the problem is that open source introduces into the organization code that can be hard to verify and difficult to trace. Organizations often don’t know who made changes to open-source code or the intent of those changes, factors that can increase a company’s attack surface.

Complicating matters is that OSS’s increasing popularity coincides with the rise of cloud and its own set of security challenges. Cloud-native applications that run on OSS, such as Linux, deliver significant benefits, including greater flexibility, faster release of new software features, effortless infrastructure management, and increased resiliency. But they also can create blind spots in an organization’s security posture, or worse, burden busy development and security teams with constant threat signals and never-ending to-do lists of security improvements.

“When you move into the cloud, a lot of the threat models completely change,” says Harmon. “The performance aspects of things are still relevant, but the security aspects are way more relevant. No CTO wants to be in the headlines associated with breaches.”

Staying out of the news, however, is becoming increasingly difficult: According to cloud company Flexera’s State of the Cloud 2024 survey, 89% of enterprises use multi-cloud environments, and cloud spend and security top respondents’ lists of cloud challenges. Security firm Tenable’s 2024 Cloud Security Outlook reported that 95% of surveyed organizations suffered a cloud breach during the 18 months before the survey.

Code-to-cloud security

Until now, organizations have relied on security testing and analysis to examine an application’s output and identify security issues in need of repair. But these days, addressing a security threat requires more than simply seeing how an application is configured at runtime. Rather, organizations must get to the root cause of the problem.

It’s a tall order that presents a balancing act for IT security teams, according to Korsunsky. “Even if you can establish that code-to-cloud connection, a security team may be reluctant to deploy a fix if they’re unsure of its potential impact on the business. For example, a fix could improve security but also derail some functionality of the application itself and negatively impact employee productivity,” he says.

Rather, to properly secure an application, says Korsunsky, IT security teams should collaborate with developers and application security teams to better understand the software they’re working with and to determine the impacts of applying security fixes.

Fortunately, a code-to-cloud security platform with comprehensive cloud-native security can help by identifying and stopping software vulnerabilities at the root. Code-to-cloud creates a pipeline between code repositories and cloud deployment, linking how the application was written to how it performs—“connecting the things that you see in runtime to where they’re developed and how they’re deployed,” says Korsunsky.

The result is a more collaborative and consolidated approach to security that enables security teams to identify a code’s owner and to work with that owner to make an application more secure. This ensures that security is not just an afterthought but a critical aspect of the entire software development lifecycle, from writing code to running it in the cloud.

Better yet, an IT security team can gain complete visibility into the security posture of preproduction application code across multi-pipeline and multi-cloud environments while, at the same time, preventing cloud misconfigurations from reaching production environments. Together, these proactive strategies not only prevent risks from arising but also allow IT security teams to focus on critical emerging threats.

The path to security success

Making the most of a code-to-cloud security platform requires more than innovative tools. Establishing best practices in your organization can ensure a stronger, long-term security posture.

Create a comprehensive view of assets: Today’s organizations rely on a wide array of security tools to safeguard their digital assets. But these solutions must be consolidated into a single pane of glass to manage exposure of the various applications and resources that operate across an entire enterprise, including the cloud. “Companies can’t have separate solutions for separate environments, separate cloud, separate platforms,” warns Korsunsky. “At the end of the day, attackers don’t think in silos. They’re after the crown jewels of an enterprise and they’ll do whatever it takes to get those. They’ll move laterally across environments and clouds—that’s why companies need a consolidated approach.”

Take advantage of artificial intelligence (AI): Many IT security teams are overwhelmed with incidents that require immediate attention. That’s all the more reason for organizations to outsource straightforward security tasks to AI. “AI can sift through the noise so that organizations don’t have to deploy their best experts,” says Korsunsky. For instance, by leveraging its capabilities for comparing and distinguishing written texts and images, AI can be used as a copilot to detect phishing emails. After all, adds Korsunsky, “There isn’t much of an advantage for a human being to read long emails and try to determine whether or not they’re credible.” By taking over routine security tasks, AI frees employees to focus on more critical activities.

Find the start line: Every organization has a long list of assets to secure and vulnerabilities to fix. So where should they begin? “Protect your most critical assets by knowing where your most critical data is and what’s effectively exploitable,” recommends Korsunsky. This involves conducting a comprehensive inventory of a company’s assets and determining how their data interconnects and what dependencies they require.

Protect data in use: The Confidential Computing Consortium is a community, part of the Linux Foundation, focused on accelerating the adoption of confidential computing through open collaboration. Confidential computing can protect an organization’s most sensitive data during processing by performing computations in a hardware-based Trusted Execution Environment (TEE), such as Azure confidential virtual machines based on AMD EPYC CPUs. By encrypting data in memory in a TEE, organizations can ensure that their most sensitive data is only processed after a cloud environment has been verified, helping prevent data access by cloud providers, administrators, or unauthorized users.
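
To make that verify-then-process pattern concrete, here is a purely conceptual Python sketch. Every name in it (tee, key_service, and their methods) is a hypothetical placeholder, not a real Azure or AMD API; it only illustrates the idea that keys are released, and data decrypted, after attestation of the environment succeeds.

```python
# Conceptual sketch of confidential computing's attestation flow.
# All objects and methods here are hypothetical placeholders, not a
# real Azure/AMD API; they illustrate the verify-then-process idea.

def process_sensitive_data(tee, key_service, encrypted_data):
    # 1. The TEE produces a hardware-signed attestation report
    #    describing the platform and the code loaded inside it.
    report = tee.get_attestation_report()

    # 2. The key service checks the report against expected measurements
    #    before releasing anything.
    if not key_service.verify(report):
        raise RuntimeError("Attestation failed; key will not be released")

    # 3. Only after verification does the key service release the
    #    decryption key, so data stays encrypted until the TEE is trusted.
    key = key_service.release_key(report)
    return tee.decrypt_and_process(encrypted_data, key)
```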

A solution for the future

As Linux, OSS, and cloud-native applications continue to increase in popularity, so will the pressure on organizations to prioritize security. The good news is that a code-to-cloud approach to cloud security can empower organizations to get a head start on security—during the software development process—while providing valuable insight into an organization’s security posture and freeing security teams to focus on business-critical tasks.

Secure your Linux and open source workloads from code to cloud with Microsoft Azure and AMD. Learn more about Linux on Azure and Microsoft Security.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

New Ecommerce Tools: September 5, 2024

Every week we publish a rundown of new products from companies offering services to ecommerce and omnichannel merchants. This installment includes updates on holiday selling, recommerce platforms, AI-powered sales assistants, payments, search optimization, and last-mile delivery management.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants: September 5, 2024

Walmart expands Marketplace with premium products, collectibles, preowned items. Walmart is growing its Marketplace assortment by adding premium beauty products and expanding its range of collectibles and pre-owned merchandise. “Resold at Walmart” is the company’s new digital destination for pre-owned items. In the Collector shop, sellers can now enable preorders to build customer anticipation for drops, and pre-owned collectibles are listed under Resold at Walmart. Eligible sellers can take advantage of 0% commission fees for collectibles through October 31.


Onfleet partners with Shopify to enhance shopping and delivery experience. Onfleet, a last-mile delivery management platform, and Shopify have announced a partnership. The integration will streamline order processing for merchants by creating Onfleet delivery tasks automatically from Shopify fulfillment orders. Shopify merchants have visibility into real-time delivery status, including confirmation that a delivery is scheduled, when it left the store, and when it arrived at the customer.

BigCommerce enhances platform with AI tools powered by Google. BigCommerce has partnered with Google to develop AI-powered tools for brands. BigAI Product Recommendations, powered by Google AI, enable brands to offer real-time personalized recommendations. With the release of BigAI Copywriter, marketers can optimize existing product descriptions or create search-engine-optimized descriptions with brand language and tone. BigAI predictive analytics offer insights into the future lifetime value of new shoppers, and BigCommerce customers soon will be able to send custom quote proposal emails powered by generative AI.

Carbon6 acquires the Junglytics AI-powered platform for Amazon sellers. Carbon6, a platform for Amazon businesses, has announced its acquisition of Junglytics, an AI-powered business intelligence product to enhance data-driven decision-making for sellers. This acquisition strengthens Carbon6’s capabilities by integrating Junglytics’ AI technology with Carbon6’s data platform. The Junglytics platform and AI assistant provide real-time data analysis and actionable insights through a conversational interface, simplifying how sellers interact with Amazon’s marketplace. Terms of the acquisition were not disclosed.


Reboxed launches circularity-as-a-service recommerce platform for brands. Circular economy startup Reboxed has announced the launch of ReboxedOS for consumer electronics recommerce. ReboxedOS is a circularity-as-a-service solution that allows businesses to integrate resale, trade-in, and recycling programs without building their own infrastructure. Launched initially as a direct-to-consumer marketplace for tech, Reboxed created a suite of services to help brands and retailers turn reuse into revenue.

Amazon invests $20 million in India-based BNPL firm Axio. India-based buy-now-pay-later firm Axio has received a $20 million equity investment from Amazon’s Smbhav Venture Fund. Formed from a merger of Capital Float, Walnut, and Walnut369, Axio enables ecommerce storefronts to embed credit financing and money management tools at the checkout. Axio, which claims 9 million credit customers and 3,000 merchants, provides the backbone for pay-later services for Amazon in India, offering shoppers payment installments of 3 to 12 months.

Rep AI raises $8.2 million for AI sales concierge. Rep AI, creator of the AI Concierge sales assistance platform, has raised $8.2 million, led by Osage Venture Partners. The investment will fuel further development of the company’s ecommerce sales tool. Leveraging behavioral AI and large language models, Rep AI enables online merchants to provide personalized sales assistance at scale. Rep AI’s technology deploys behavioral AI to analyze each shopper’s browsing patterns, proactively offering help and providing tailored product specifications and personalized recommendations.


Google updates tools for retailers this holiday season. At its recent Think Retail virtual event, Google released updates for the holidays. Users of the new Merchant Center can soon explore shopping trends and popular shopping queries, ranked by popularity and organized by topic and product. Google says it will add two generative AI insights features to the Merchant Center: (i) insight summaries at the top of the analytics tab for product performance and (ii) a tool that translates user requests into custom reports with performance data.

Etsy introduces search visibility page favoring low-cost shipping. Etsy has announced a “search visibility page” in Shop Manager, offering more insights about how merchants appear in search along with recommended actions to help improve search performance. In addition, starting October 1, Etsy will factor U.S. domestic shipping prices into search for U.S. listings, excluding certain categories and items shipped from outside the U.S.

Ikea launches second-hand marketplace. Ikea is launching a second-hand online marketplace so that customers can sell to each other. Ikea Preowned is up and running in Madrid and Oslo. The company plans to roll out the site globally by December. The seller puts up listings on Ikea Preowned, and Ikea’s algorithms generate the item’s details, including measurements and the original retail price.

PayJunction expands payment capabilities with text-to-pay. PayJunction, a tech-focused payments company, has added text-to-pay capabilities to its no-code payments integration, virtual terminal, and API. According to the company, the feature enables businesses to implement text-to-pay into current invoicing workflows, allowing them to accept customer payments via SMS without handling cardholder information.


Optimized Landing Pages Reduce Ad Costs

Optimizing a website for organic search rankings is a well-known practice. Less common is optimizing for ad engines, which is crucial for managing the cost and effectiveness of paid traffic.

Ad targeting on social media and search engines relies on algorithms that bid for placements on advertisers’ behalf. Algorithms evaluate an ad and its linked landing page, assigning a relevancy score that influences the ad’s cost and performance. Meta calls the score “Quality Rank”; Google refers to it as “Ad Rank.” Both prioritize the relevance of an ad to the landing page.

Bidding engines on Meta, Google, and others optimize around a goal, typically conversions (for ecommerce). Low click rates on ads drive up their costs, as do landing experiences with low conversions.
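
That penalty can be made intuitive with a common textbook simplification of a quality-weighted auction, often used to explain Google-style Ad Rank; neither engine publishes its real formula, so treat this toy Python sketch as illustration only. Halving your quality score roughly doubles what you pay per click against the same competitor:

```python
# Toy model of a quality-weighted ad auction. This is a common
# simplification used to explain Google-style Ad Rank, not either
# engine's actual (and non-public) formula.

def effective_cpc(my_quality: float, runner_up_bid: float, runner_up_quality: float) -> float:
    # You pay just enough for (your bid x quality) to beat the runner-up's rank.
    runner_up_rank = runner_up_bid * runner_up_quality
    return runner_up_rank / my_quality + 0.01

# Same competitor, but your quality score halves:
print(effective_cpc(my_quality=8, runner_up_bid=2.00, runner_up_quality=6))  # 1.51
print(effective_cpc(my_quality=4, runner_up_bid=2.00, runner_up_quality=6))  # 3.01
```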

But an optimized experience can reduce ad costs. Note the chart below showing my firm’s A/B test on Meta. The test traffic was split: half went to a standard ecommerce product page and half to an ad-engine-optimized version. Both used the same ad and budget. The optimized experience received 28% more traffic due to lower per-impression ad costs and drove 99% more revenue than the product page.

3 AEO Hacks

Ad engine optimization (AEO) prioritizes the relevance of a landing page to an ad. Little relevance produces high bounce rates and low conversions. Ad engines detect this and penalize the campaign with higher ad costs. Custom landing pages are the traditional fix. They work well for bottom-of-funnel and long-running campaigns but not for top-of-funnel customer acquisition, especially on social media.

For example, a shopper searching Google for a “red four-slot toaster” has a clear purchase intent and would likely convert from a landing page featuring red four-slot toasters.

But a consumer on Facebook might see an aspirational kitchen ad that happens to feature a red four-slot toaster, requiring a different landing approach.

Consider these AEO hacks:

  • Mirror campaign creative. The simplest way to increase relevance is to mirror the images in the ad onto the landing page. This reassures shoppers they are in the right place, preventing an early bounce.
  • Make it easy to buy all of the products. The top reason shoppers bounce is they can’t find the product, or it looks different. Incredibly, landing pages without the promoted products are common. It’s especially problematic on mobile, where the product may be swipes away from where the shopper lands. Hence, ensure the landing page shows all products in the ad, not just the category or one item.
  • Enable all intents. An ad showing a model on a beach wearing a t-shirt, shorts, and a hat could drive multiple interests: the t-shirt, shorts, summer, beachwear, or more. A landing experience focused on just one of those items will result in high bounces.

An optimized page makes it easy to buy, say, the advertised t-shirt while providing easy navigation to the other products. The page simultaneously shows all products in the ad and draws visitors into the broader category. The more visitors that browse, the more signals return to the ad engine, increasing the relevance score and lowering ad costs.

Algorithm Driven

To function, optimized pages require implementing the ad engine’s server-side conversion API (Meta’s Conversions API, or “CAPI”; Google offers comparable conversion uploads) to send the engagement signals back to the engine. Testing and evolving the AEO experience, whether manually or through automation, also helps.
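
As a rough illustration of what those signals look like, here is a minimal Python sketch of sending a server-side Purchase event to Meta’s Conversions API. The pixel ID, access token, URLs, and event values are placeholders, and the Graph API version in the URL may differ from what your account uses:

```python
# Minimal sketch: send a server-side Purchase event to Meta's
# Conversions API. PIXEL_ID and ACCESS_TOKEN are placeholders, and
# the Graph API version may differ from your account's.
import time
import json
import hashlib
import urllib.request

PIXEL_ID = "YOUR_PIXEL_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def hash_email(email: str) -> str:
    # Meta requires user identifiers to be normalized and SHA-256 hashed.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

payload = {
    "data": [{
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "event_source_url": "https://example.com/checkout/thank-you",
        "user_data": {"em": [hash_email("shopper@example.com")]},
        "custom_data": {"currency": "USD", "value": 42.00},
    }]
}

url = f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events?access_token={ACCESS_TOKEN}"
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```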

Advertising on Meta, Google, and other channels is now algorithm-driven. Improving performance means optimizing for the algorithm. The combined quality of the ad and landing page directly impacts ad costs.

Google May Unify Schema Markup & Merchant Center Feed Data via @sejournal, @MattGSouthern

Google revealed it’s working to bridge the gap between two key product data sources that power its shopping results – website markup using schema.org structured data and product feeds submitted via Google Merchant Center.

The initiative, mentioned during a recent “Search Off The Record” podcast episode, aims to achieve one-to-one parity between the product attributes supported by schema.org’s open-source standards and Google’s merchant feed specifications.

Leveraging Dual Product Data Pipelines

In search results, Google leverages structured data markup and Merchant Center product feeds to surface rich product listings.

Irina Tuduce, a longtime Google employee involved with the company’s shopping search infrastructure, says merchants should utilize both options.

Tuduce stated:

“We recommend doing both. Because, as I said, in signing up on the Merchant Center UI, you make sure some of your inventory, the one that you specify, will be in the Shopping results. And you can make sure you’ll be on dotcom on the Shopping tab and Image tab.

And then, if you specify how often you want us to refresh your data, then you can be sure that that information will be refreshed. Otherwise, yeah, you don’t know when we will have the resources to recrawl you and update that information.”

Meanwhile, implementing schema.org markup allows Google to extract product details from websites during the crawling process.
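
For illustration, here is a minimal sketch of emitting schema.org Product markup as JSON-LD, rendered in Python for brevity. The product values are invented, and the attributes shown are only a small subset of what schema.org and Google’s product rich results support:

```python
# Sketch: generate schema.org Product JSON-LD for a product page.
# Values are invented; name, sku, image, and offers (price,
# availability) are among the attributes product rich results use.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Four-Slot Toaster",
    "sku": "TOAST-4S",
    "image": "https://example.com/images/toaster.jpg",
    "description": "A four-slot toaster with adjustable browning control.",
    "offers": {
        "@type": "Offer",
        "price": "59.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/toast-4s",
    },
}

# Embed the output in the page so crawlers can read it while crawling:
print(f'<script type="application/ld+json">{json.dumps(product_jsonld)}</script>')
```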

Reconciling Markup and Feed Discrepancies

However, discrepancies can arise when the product information in a merchant’s schema.org markup doesn’t perfectly align with the details provided via their Merchant Center feed uploads.

Tuduce explained:

“If you don’t have the schema.org markup on your page, we’ll probably stick to the inventory that you specify in your feed specification.”

Google’s initiative aims to resolve such discrepancies.

Simplifying Merchant Product Data Management

Unifying the product attributes across both sources aims to simplify data management and ensure consistent product listings across Google.

Regarding the current inconsistencies between schema.org markup and merchant feed specifications, Tuduce says:

“The attributes overlap to a big extent, but there are still gaps that exist. We will want to address those gaps.”

As the effort progresses, Google plans to keep marketers informed by leveraging schema.org’s active GitHub community and opening the update process to public feedback.

The unified product data model could keep product details like pricing, availability, and variant information consistently updated and accurately reflected across Google’s search results.

Why This Matters

For merchants, consistent product listings with accurate, up-to-date details can boost visibility in Google’s shopping experiences. Streamlined data processes also mean less redundant work.

For consumers, a harmonized system translates to more relevant, trustworthy shopping journeys.

What You Can Do Now

  • Audit current product data across website markup and merchant feeds for inconsistencies.
  • Prepare to consolidate product data workflows as Google’s unified model rolls out.
  • Implement richer product schema markup using expanded vocabulary.
  • Monitor metrics like impressions/clicks as consistent data surfaces.
  • Prioritize product data hygiene and frequent catalog updates.

By aligning your practices with Google’s future plans, you can capitalize on new opportunities for streamlined product data management and enhanced shopping search visibility.


New LiteSpeed Cache Vulnerability Puts 6 Million Sites at Risk via @sejournal, @martinibuster

Another vulnerability was discovered in the LiteSpeed Cache WordPress plugin—an Unauthenticated Privilege Escalation that could lead to a total site takeover. Unfortunately, updating to the latest version of the plugin may not be enough to resolve the issue.

LiteSpeed Cache Plugin

The LiteSpeed Cache Plugin is a website performance optimization plugin that has over 6 million installations. A cache plugin stores a static copy of the data used to create a web page so that the server doesn’t have to repeatedly fetch the exact same page elements from the database every time a browser requests a web page.

Storing the page in a “cache” reduces the server load and speeds up the time it takes to deliver a web page to a browser or a crawler.

LiteSpeed Cache also performs other page speed optimizations, like compressing CSS and JavaScript files (minifying), inlining the most important CSS for rendering a page directly in the HTML code, and other tweaks that together make a site faster.

Unauthenticated Privilege Escalation

An unauthenticated privilege escalation is a type of vulnerability that allows a hacker to attain site access privileges without having to sign in as a user. This makes it easier to hack a site in comparison to an authenticated vulnerability that requires a hacker to first attain a certain privilege level before being able to execute the attack.

Unauthenticated privilege escalation typically occurs because of a flaw in a plugin (or theme) and in this case it’s a data leak.

Patchstack, the security company that discovered the vulnerability, writes that the vulnerability can only be exploited under two conditions:

“Active debug log feature on the LiteSpeed Cache plugin.

Has activated the debug log feature once before (not currently active now) and the /wp-content/debug.log file is not purged or removed.”

Discovered By Patchstack

The vulnerability was discovered by researchers at Patchstack, a WordPress security company that offers a free vulnerability warning service and advanced protection for as little as $5/month.

Oliver Sild, founder of Patchstack, explained to Search Engine Journal how this vulnerability was discovered and warned that updating the plugin is not enough: users still need to manually purge their debug logs.

He shared these specifics about the vulnerability:

“It was found by our internal researcher after we processed the vulnerability from a few weeks ago.

Important thing to keep in mind with this new vulnerability is that even when it gets patched, the users still need to purge their debug logs manually. It’s also a good reminder not to keep debug mode enabled in production.”

Recommended Course of Action

Patchstack recommends that users of LiteSpeed Cache WordPress plugin update to at least version 6.5.0.1.
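
As a quick sanity check under the conditions Patchstack describes, a sketch like the following can test whether your site’s /wp-content/debug.log (the file named in the advisory) is publicly reachable. Replace SITE with your domain; a 200 response is only a heuristic for exposure, and even a negative result is no substitute for purging old logs:

```python
# Hedged helper: check whether /wp-content/debug.log is publicly
# reachable. This only detects public exposure; per Patchstack, you
# should still purge old debug logs after updating the plugin.
import urllib.error
import urllib.request

SITE = "https://example.com"  # placeholder: your WordPress site

try:
    req = urllib.request.Request(f"{SITE}/wp-content/debug.log", method="GET")
    with urllib.request.urlopen(req, timeout=10) as resp:
        if resp.status == 200:
            print("WARNING: debug.log is publicly readable; purge it now.")
except urllib.error.HTTPError as e:
    print(f"debug.log not publicly readable (HTTP {e.code}).")
except urllib.error.URLError as e:
    print(f"Request failed: {e.reason}")
```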

Read the advisory at Patchstack:

Critical Account Takeover Vulnerability Patched in LiteSpeed Cache Plugin

Featured Image by Shutterstock/Teguh Mujiono

August Update from IAB Shows Ad Spend & Opportunities For Growth In 2024 via @sejournal, @gregjarboe

This morning, the IAB released The Outlook Study: August Update, a follow-up to its initial November 2023 study that provides a snapshot of projected ad spend, opportunities, and challenges for the remainder of 2024.

The study outlines the shifts that have occurred throughout the year, capturing current perspectives from buy-side ad investment decision-makers at brands and agencies.

Here are some of the key takeaways for digital marketers:

  • Buyers increased their 2024 ad spend projections from +9.5% projected at the end of 2023 to +11.8% today.
  • Nearly all channels are expected to post higher growth rates year-over-year (YoY), with even Linear TV rebounding.
  • Retail media’s ascent continues, with buyers revising YoY projections from +21.8% to +25.1%.
  • Buyers continue to focus on cross-funnel KPIs while shifting efforts towards reach optimization as interest in new KPIs wanes.
  • Measurement challenges persist for the industry, while economic concerns subside.

In other words, it’s time to spring forward, not fall back, in the media and marketing industries.

Buyers’ Ad Spending Forecasts For 2024 Have Been Revised Upward

The increase in projections is not what many digital marketers were expecting, so what is happening in the changing industry landscape?

Ad spending in the second half of 2024 is being driven by political spending around the presidential election and other cyclical events, such as the Summer Olympic Games.

Based on IAB’s recent email survey of 200 buy-side ad investment decision-makers, primarily at brands and agencies, nearly all channels are expected to post higher growth rates YoY.

Yes, even linear TV is now expected to grow 4.3%, but nine other channels are expected to grow at even faster rates:

  • Connected TV (CTV) by 18.4%.
  • Social media by 16.3%.
  • Paid search by 13.1%.
  • Podcasts by 12.6%.
  • Digital video excluding CTV by 12.5%.
  • Digital out-of-home (OOH) by 8.9%.
  • Digital audio, excluding podcasts, by 8.3%.
  • Digital display by 7.4%.
  • Gaming by 5.1%.

Why Is Retail Media Expected To Continue Growing?

Ad spending by buyers – particularly in the consumer-packaged goods (CPG) and beauty categories – is set to surge in the U.S. this year, pushing overall retail media ad spending to reach one-fifth of the total 2024 ad spend.

Okay, these are the reasons to spring forward, even if we’re on the verge of fall. But there are a couple of challenges that digital marketers still face.

For example, there’s been a decline in focus on new ad KPIs (e.g., attention metrics, weighted CAC, etc.), which suggests there’s been a renewed interest in refining and leveraging established metrics to achieve cross-funnel goals.

However, goals can vary by channel.

As I mentioned this summer in “Business Outcomes Are The Top KPI Of Video Ad Buyers – IAB Report Part Two,” IAB’s latest Digital Video report found that within the digital video channel, buyers are determining success via business outcomes, i.e., sales, store/website visits, etc.

So, figuring out how to use Google Analytics 4 (GA4) to measure business outcomes instead of marketing outputs remains “the road less traveled.”

Understanding Evolving Consumer Habits Is A Growing Concern

While economic worries have faded, the concern over executing cross-channel media measurement has risen.

The resilient economy, marked by a 2.3% rise in consumer spending in Q2 2024, has eased buyers’ concerns.

But, as media convergence gains traction, cross-channel measurement remains a top priority, especially for large advertisers that spend over $50 million annually.

Other concerns, like managing reach and frequency across screens and channels, as well as media inflation, have remained flat.

Understanding evolving consumer habits is a growing concern – and is keeping significantly more buyers up at night than it did last year.

It does seem like it’s time to spring forward in the media and marketing industries, although this has traditionally been the season when digital marketers prepare to fall back.

All data above has been taken from The 2024 Outlook Study: August Update – A Snapshot into Ad Spend, Opportunities, and Strategies for Growth by the IAB. The study is a follow-up to the initial November 2023 release, providing current perspectives from 200 buy-side ad investment decision makers at brands and agencies.



Featured Image: SeventyFour/Shutterstock

SearchGPT vs. Google: Early Analysis & User Feedback via @sejournal, @MattGSouthern

OpenAI, the company behind ChatGPT, has introduced a prototype of SearchGPT, an AI-powered search engine.

The launch has sparked considerable interest, leading to discussions about its potential to compete with Google.

However, early studies and user feedback indicate that while SearchGPT shows promise, it has limitations and needs more refinement.

Experts suggest it needs further development before challenging current market leaders.

Study Highlights SearchGPT’s Strengths and Weaknesses

SE Ranking, an SEO software company, conducted an in-depth analysis of SearchGPT’s performance and compared it to Google and Bing.

The study found that SearchGPT’s search results are 73% similar to Bing’s but only 46% similar to Google’s.

Interestingly, 26% of domains ranking in SearchGPT receive no traffic from Google, indicating opportunities for websites struggling to gain traction.

The study highlighted some of SearchGPT’s key features, including:

  • The ability to summarize information from multiple sources.
  • A conversational interface for refining searches.
  • An ad-free user experience.

However, the research noted that SearchGPT lacks the variety and depth of Google’s search results, especially for navigational, transactional, and local searches.

The study also suggested that SearchGPT favors authoritative, well-established websites, with backlinks being a significant ranking factor.

Around 32% of all SearchGPT results came from media sources, increasing to over 75% for media-related queries.

SE Ranking notes that SearchGPT needs improvement in providing the latest news, as some news results were outdated.

User Experiences & Limitations Reported By The Washington Post

The Washington Post interviewed several early testers of SearchGPT and reported mixed reviews.

Some users praised the tool’s summarization capabilities and found it more helpful than Google’s AI-generated answers for certain queries.

Others, however, found SearchGPT’s interface and results less impressive than those of smaller competitors like Perplexity.

The article also highlighted instances where SearchGPT provided incorrect or “hallucinated” information, a problem that has plagued other AI chatbots.

While the SE Ranking study estimated that less than 1% of searches returned inaccurate results, The Washington Post says there’s significant room for improvement.

The article also highlighted Google’s advantage in handling shopping and local queries due to its access to specialized data, which can be expensive to acquire.

Looking Ahead: OpenAI’s Plans For SearchGPT and Potential Impact on the Market

OpenAI spokesperson Kayla Wood revealed that the company plans to integrate SearchGPT’s best features into ChatGPT, potentially enhancing the popular language model’s capabilities.

When asked about the possibility of including ads in SearchGPT, Wood stated that OpenAI’s business model is based on subscriptions but didn’t specify whether SearchGPT would be offered for free or as part of a ChatGPT subscription.

Despite the excitement surrounding SearchGPT, Google CEO Sundar Pichai recently reported continued growth in the company’s search revenue, suggesting that Google may maintain its dominant position even with the emergence of new AI-powered search tools.

Top Takeaways

Despite its current limitations, SearchGPT has the potential to shake up online information seeking. As OpenAI iterates based on user feedback, its impact may grow significantly.

Integrating SearchGPT’s best features into ChatGPT could create a more powerful info-seeking tool. The proposed subscription model raises questions about competition with free search engines and user adoption.

While Google’s search revenue and specialized query handling remain strong, SearchGPT could carve out its own niche. The two might coexist, serving different user needs.

For SearchGPT to truly compete, OpenAI must address accuracy issues, expand query capabilities, and continuously improve based on user input. It could become a viable alternative to traditional search engines with ongoing development.


Featured Image: Robert Way/Shutterstock

How To Create High-Quality Content via @sejournal, @sejournal

SEO success depends on providing high-quality content to your audiences. The big question is: What exactly does “high quality” mean?

Content has many meanings. In digital marketing, it simply means the information a website displays to users.

But don’t forget: In a different context with a different emphasis on the word (con-TENT as opposed to CON-tent), content is a synonym for happy and satisfied. The meaning is different, but the letters are the same.

If you want to understand content quality online, keep these two different definitions in mind.

Every webpage has content. “High-quality” content depends on contexts like:

  • What the needs of your audience are.
  • What users expect to find.
  • How the content is presented and how easy it is to pull critical information out of it quickly.
  • How appropriate the medium of the content is for users’ needs.

What Makes Content High Quality?

This is a complex question that we hope to answer in full during this article. But let’s start with a simple statement:

High-quality content is whatever the user needs at the time they’re looking for it.

This might not be helpful in a specific sense but note this somewhere because it’s a guiding light that has far-reaching implications for your website and audience strategy.

We use this definition because the quality of your content isn’t static. Google and other search engines know this and frequently update search engine results pages (SERPs) and algorithms to adjust for changing user priorities.

You need to bake this idea into your understanding of content and audiences. You can have the most beautifully written, best-formatted content, but if your target audience doesn’t need that information in that format, it’s not “high-quality” for SEO.

If you provide a story when the user is looking for a two-sentence answer, then you’re not serving their interests.

This is especially pertinent with the introduction of generative AI features into search platforms. It continues the “zero-click” phenomenon, in which Google answers certain types of searches directly on the results page and never sends the user to a website.

Defining & Meeting Audience Needs

SEO professionals have many different ways of conceptualizing these ideas. One of the most common is “the funnel,” which categorizes content into broad categories based on its position in a marketing journey.

The funnel is usually categorized something like this:

  • Top of the funnel: Informational intent and awareness-building content.
  • Middle of the funnel: Consideration intent and product/service-focused content.
  • Bottom of the funnel: Purchase intent and conversion content.

While it’s helpful to categorize types of content by their purpose in your marketing strategy, this can be an overly limiting view of user intent and encourages linear thinking when you conceptualize user journeys.

As Google gets more specific about intent, such broad categorization becomes less helpful in determining whether content meets users’ needs.

Build a list of verbs that describe the specific needs of your audience while they’re searching. Ideally, you should base this on audience research and data you have about them and their online activity.

Learn who they follow, what questions they ask, when a solution seems to satisfy them, what content they engage with, etc.

Then, create verb categories to apply to search terms during your keyword research. For example:

  • Purchase.
  • Compare.
  • Discover.
  • Learn.
  • Achieve.
  • Check.

User Intends To Purchase

If the user is looking for something to buy, then high quality probably looks like a clean landing or product page that’s easy to navigate. Be sure to include plenty of detail so search engines can match your page to specific parameters the user might enter or have in their search history.

Product photos and videos, reviews and testimonials, and Schema markup can all help these pages serve a better experience and convert. Pay particular attention to technical performance and speed.

Remember that you’re highly likely to go up against ads on the SERPs for these queries, and driving traffic to landing pages can be difficult.

User Intends To Compare

This could take a couple of different forms. Users might come to you for reviews and comparisons on other things or to compare your benefits to those of another company.

For this content to be successful, you need to be dialed into what problems a user is trying to solve, what pain points they have, and how specific differences impact their outcomes.

This is the old “features vs. benefits” marketing argument, but the answer is “both.” Users could want to see all the features listed, but don’t forget to contextualize how those features solve specific problems.

User Intends To Discover

This intent could describe a user looking for industry news, data to support their research, or new influencers to follow.

Prioritize the experience they’re seeking and ensure that the discovery happens quickly.

This could look like adding text summaries or videos to the top of posts, tables of contents to assist with navigation, or page design elements that highlight the most critical information.

User Intends To Learn

If a user intends to learn about a topic, a long, well-organized post, video, or series of either may serve them best. This content should be in-depth, well-organized, and written by genuine topic experts. You may need to demonstrate the author’s qualifications to build trust with readers.

You must consider the existing knowledge level of your target audience. Advanced content will not satisfy the needs of inexperienced users, while basic content will bore advanced users.

Don’t try to satisfy both audiences in a single experience. It’s tempting to include basic questions in this type of content to target more SEO keywords, but think about whether you’re trading keywords for user experience.

For example, if you write a post about “how to use a straight razor” and your subheadings look like the ones below, you’re probably not serving the correct intent.

  • What is a straight razor?
  • Are straight razors dangerous?
  • Should I use a straight razor?

The chances are high that someone landing on your page about “how to use a straight razor” doesn’t need answers to these basic questions. In other words, you’re wasting their time.

User Intends To Achieve

A slightly different intent from learning. In this instance, a user has a specific goal for an action they want to perform. Like learning content, it should be written by subject matter experts.

If the person creating this content doesn’t have sufficient first-hand experience, they won’t effectively guide users and predict their real-world needs. This results in unsatisfying content and is a failure point of many SEO content strategies.

In SEJ’s SEO Trends 2024 ebook, Mordy Oberstein, Head of SEO Brand at Wix, said:

“One trend I would get ahead of that aligns with Google’s focus on expertise and experience is what I’m coining “situational content.” Situational content attempts to predict the various outcomes of any advice or the like offered within the content to present the next logical steps. If, for example, a piece of content provides advice about how to get a baby to sleep through the night, it would then offer the next steps if that advice didn’t work.

This is “situational” – if X doesn’t work, you might want to try Y. Situational content creates a compelling form of content I see more frequently. It does a few things for the reader:

  • It addresses them and their needs directly.
  • It’s more conversational than standard content (an emerging content
    trend itself).
  • To predict various outcomes and situations, you have to actually know what
    you’re talking about.

That latter point directly addresses E-E-A-T. You can only predict and address secondary situations with expertise and experience. Most of all, situational content indicates to the user that a real person, not a large language model (LLM), wrote it.”

The difference between “learn” and “achieve” intents can be difficult to see. Sometimes, you might need to satisfy both. Pay careful attention to these types of content.

User Intends To Check

Misunderstanding when a user just wants to “check” something can cause you to waste resources on content doomed not to perform, another common failure point of SEO strategies. If what a user needs can be solved in a few sentences, you’re in zero-click territory.

For example, ‘How to tie a bowtie’.

That is, Google will serve users an answer on the SERP, and they may not click a link at all. You may want to target these types of queries as part of longform content for other search intents using good content organization and Schema markup.

That way, you can give your authoritative and in-depth content opportunities to show up in rich results on SERPs, and users might click through if they see more information available or have follow-up questions.

You should consider these intents part of your SEO strategy, but think of them as awareness and branding tactics. AI features such as AI Overviews in Google seek to surface quick answers to queries. It will be much harder to acquire clicks on SERPs where features like this are activated.

If you struggle to understand why well-written content is losing traffic, you should assess whether you wrote hundreds of words to answer a query that only needed 30.

More intents exist, and to complicate matters further, they are not exclusive to each other in a single piece of content. Comparison and discovery intents, for example, often combine in listicles, product comparisons, and titles like “X alternatives to Y.”




Content Quality Signifiers

While there’s no quantifiable answer to what good content means, there are many ways to evaluate it to ensure it contains key signs of quality.

Google’s content guidelines provide some questions you can ask yourself to objectively assess your content’s quality.

The SEO content mantra is E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness.

Google uses many signals to approximate these concepts and apply these signals to ranking algorithms. To be clear, E-E-A-T are not ranking factors themselves. But they are the concepts that ranking systems attempt to emulate via other signals.

These concepts apply to individual pages and to websites as a whole.

Experience: Are the people creating content directly knowledgeable about the subject matter, and do you demonstrate credible experience?

Expertise: Does your content demonstrate genuine expertise through depth, accuracy, and relevance?

Authoritativeness: Is your website an authoritative source about the topic?

Trust: Is your website trustworthy, considering the information or purposes at hand?

In its content guidelines, Google says this about E-E-A-T:

“Of these aspects, trust is most important. The others contribute to trust, but content doesn’t necessarily have to demonstrate all of them. For example, some content might be helpful based on the experience it demonstrates, while other content might be helpful because of the expertise it shares.”

Understanding these concepts is critical for building a content strategy because publishing content with poor E-E-A-T signals could impact your website as a whole. Google’s language downplays this potential impact, but it’s critical to know that it’s possible. It’s tempting to assume that because a website has high “authority” in a general sense or in one particular area, anything it publishes is considered authoritative. This may not be true.

If you chase traffic by creating content outside your core areas of authority and expertise, that content may perform poorly and drag the rest of your site down.




Creating Effective SEO Content

This article focuses on written content, but don’t neglect multimedia in your content strategy.

The thought process behind content should go a little bit like this:

Audience > Query (Keywords) > Intent > Brief / Outline > Create

You can also express it as a series of questions:

  • Audience: Who is our audience?
  • Query: What are they searching for?
  • Intent: Why?
  • Brief: How can we best assist them?
  • Create: What does exceptional user experience look like?

Keyword Research For Content

Keyword research is a massive topic on its own, so here are some key pieces of advice and a few additional resources:

  • Look at the SERPs for the keywords you target to understand what Google prioritizes, what your competitors are doing, what success looks like, and whether there are gaps you can fill.
  • Cluster related keywords together and develop a content strategy that covers multiple branching areas of a topic deeply (see the sketch after this list).
  • High search volume often means high competition. Allocate your resources carefully between acquiring lower competition positions and fighting for a slice of competitive traffic.
  • Building a robust catalog of content focused on long-tail keywords can help you acquire the authority to compete in more competitive SERPs for related topics.
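
As a toy illustration of the clustering idea referenced above, the following Python sketch groups queries that share enough non-stopword tokens. Real workflows typically cluster by SERP overlap or embeddings alongside volume data; this only demonstrates the grouping concept:

```python
# Toy keyword clustering: group queries that share enough meaningful
# words. Real workflows usually cluster by SERP overlap or embeddings;
# this only illustrates the grouping idea behind topic clusters.

STOPWORDS = {"how", "to", "a", "an", "the", "for", "of", "best"}

def tokenize(phrase: str) -> set[str]:
    return {w for w in phrase.lower().split() if w not in STOPWORDS}

def cluster_keywords(keywords: list[str], min_shared: int = 2) -> list[list[str]]:
    clusters: list[list[str]] = []
    for kw in keywords:
        for cluster in clusters:
            # Join the first cluster whose seed phrase shares enough tokens.
            if len(tokenize(kw) & tokenize(cluster[0])) >= min_shared:
                cluster.append(kw)
                break
        else:
            clusters.append([kw])  # No match: start a new cluster.
    return clusters

keywords = [
    "how to use a straight razor",
    "straight razor shaving tips",
    "best straight razor for beginners",
    "how to tie a bowtie",
]
for group in cluster_keywords(keywords):
    print(group)
# The three razor queries cluster together; the bowtie query stands alone.
```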



Briefing SEO Content

Once you have performed your research and identified the intents you must target, it’s time to plan the content.

SEO professionals may not have the required knowledge to create content that demonstrates experience and expertise – unless they’re writing about SEO.

They’re SEO specialists, so if your website is about finance or razor blades, someone else will need to provide the knowledge.

Briefing is critical because it allows the SEO team to communicate all that hard work and research to the person or team creating the content. A successful brief should give the content creators:

  • The target keyword strategy, with suggestions or a template for the title and subheadings.
  • The purpose of the content for the user: What the user should learn or be able to accomplish.
  • The purpose of the content for the business: Where it falls into the marketing strategy and relevant KPIs.
  • Details such as length, style guide or voice notes, and key pieces of information to be included.

Creating SEO Content

Your research should guide the format of your writing.

Remember, intent impacts the usability of different types of content. Prioritize the information most likely to solve the user’s intent.

You can do this by providing summaries, tables of contents, videos, pictures, skip links, and, most importantly, headings.

Use The Title & Headings To Target Keywords & Organize Information

The title of a page is your primary keyword opportunity. It’s also the first thing users will see on a SERP, which impacts CTR. Match the title to your target query and think about effectively describing the content to entice a click. But don’t misrepresent your page for clicks.

Your primary responsibility in SEO content is to set expectations and then deliver on them. Don’t set an expectation you can’t deliver on.

HTML heading formats help users navigate the page by breaking up blocks of text and indicating where certain topics are covered. They’re critical to your on-page SEO, so use your keywords.

Expectations are as true for headings as for titles. Headings should be descriptive and useful. Prioritize setting an expectation for what the user will find on that part of the page and then delivering on that expectation.




Get To The Point

Whether content should be long or short depends on its purpose. All SEO content should be as short as possible while achieving its goals. “As short as possible” could mean 4,000 words.

If you need 4,000 words to achieve your goal, then use them. But don’t add any more than you need.

This is a call to avoid rambling, especially in introductions. Do you really need to cite the projected growth of an industry just to prove it’s worth talking about?

Not unless you’re writing a news story about that growth. Cut that sentence and the link to Statista from your introduction. (No shade, Statista, you rock.)

Features like skip links can also help with this. Give users the option to skim and skip directly to what they need.

Use Internal Links To Connect Your Pages Together & Provide Further Reading

Internal links are the bedrock of SEO content strategies. They are how you organize related pages and guide users around your website. They also spread the SEO value of your pages to the pages they’re connected to.

In the keyword research section, we suggested that you create clusters of keywords and topics to write about – this is why. You build authority by covering a topic in-depth and creating multiple pages exploring it and all its subtopics.

You should link between pages related to one another at contextually important points in the content. You can use this tactic to direct the SEO power of multiple pages to one important page for your strategy or your business.

Contextually relevant links that properly set expectations for what the user will find also contribute to a good site experience.




Use Personal Experiences And Unique Expertise To Stand Out

AI presents numerous challenges for SEOs. Anyone can quickly create content at scale using generative AI tools.

The tools can replicate competitors, synthesize content together from myriad sources, and enable breakneck publishing paces. This poses two core problems:

  • How do you stand out with so much AI content out there?
  • How do you build trust in audiences looking for legitimate experts?

For now, the best answer is to lean into the E-E-A-T principles that Google prioritizes.

  • Tell human stories with your content that demonstrate your experience and expertise.
  • Use Oberstein’s “situational content” principle, mentioned earlier in this article, to connect with your audience’s experiences and needs.
  • Ensure that content is created by verifiable experts, especially if that content involves topics that can impact the audience’s well-being (YMYL).

SEO Content Is Both A Strategy & An Individual Interaction

It’s easy to focus on what you need from users: what keyword you want to rank for, what you want users to click, and what actions you want them to take.

But all of that falls apart if you don’t honor the individual interaction between your website and a user who needs something.

Audience-first content is SEO content. Content is a core function of SEO because it’s the basis of how humans and algorithms understand your website.




Featured Image: Art_Photo/Shutterstock

Google Confirms It’s Okay To Ignore Spam Scores via @sejournal, @martinibuster

Google’s John Mueller answered a Reddit question about how to lower a website’s spam score. His answer reflected an important insight about third-party spam scores and their relation to how Google ranks websites.

What’s A Spam Score?

A spam score is the opinion of a third-party tool that reviews data like inbound links and on-page factors based on whatever the tool developers believe are spam-related factors and signals. While there are a few things about SEO that most people can agree on, there is a lot more about SEO that digital marketers dispute.

The reality is that third-party tools use undisclosed factors to assign a spam score that guesses at how a search engine might use its own unknown metrics to assess website quality. That’s multiple layers of uncertainty to trust.

Should You Worry About Spam Scores?

The question asked on Reddit was whether the site owner should worry about a third-party spam score and what could be done to achieve a better one.

This is the question:

“My site is less than 6 months old with less than 60 blog posts.

I was checking with some tool it says I have 302 links and 52 referring domains. My worry is on the spam score.

How should I go about reducing the score or how much is the bad spam score?”

Google’s John Mueller answered:

“I wouldn’t worry about that spam score.

The real troubles in your life are apt to be things that never crossed your worried mind, the kind that blindside you at 4 p.m. on some idle Tuesday.”

He then followed up with a more detailed response:

“And to be more direct – Google doesn’t use these spam scores. You can do what you want with them. They’re not going to change anything for your site.

“I’d recommend taking the time and instead making a tiny part of your website truly awesome, and then working out what it would take to make the rest of your website like that. This spam score tells you nothing in that regard. Ignore it.”

Spam Scores Tell You Nothing In That Regard

John Mueller is right: third-party spam scores don’t reflect site quality. They’re only opinions based on what a tool’s developers believe, which could be outdated or insufficient. We just don’t know, because the factors used to calculate third-party spam scores are secret.

In any case, there is no agreement about what the ranking factors are, and no agreement about what counts as an on-page or off-page factor. Even the idea of “ranking factors” is somewhat debatable, because nowadays Google uses various signals to determine whether a site is trustworthy and relies on core topicality systems to understand search queries and web pages. That’s a world away from using ranking factors to score web pages. Can we even agree on whether there’s a difference between ranking factors and signals? Where does something like a (missing) quality signal even fit in a third-party spam metric?

Popular lists of 200 ranking factors often contain factual errors and outdated ideas based on decades-old concepts of how search engines rank websites. We’re in a period when search engines are moving somewhat past the concept of “ranking factors” in favor of core topicality systems for understanding web pages (and search queries) and an AI system called SpamBrain that weeds out low-quality websites.

So yes, Mueller makes a valid point when he advises not to worry about spam scores.

Read the discussion on Reddit:

Is site spam score of 1% bad?

Featured Image by Shutterstock/Krakenimages.com

The race to replace the powerful greenhouse gas that underpins the power grid

The power grid is underpinned by a single gas that is used to insulate a range of high-voltage equipment. The problem is, it’s also a super powerful greenhouse gas, a nightmare for climate change.

Sulfur hexafluoride (or SF6) is far from the most common gas that warms the planet, contributing around 1% of warming to date—carbon dioxide and methane are much more well-known and abundant. However, like many other fluorinated gases, SF6 is especially potent: It traps about 20,000 times more energy than carbon dioxide does over the course of a century, and it can last in the atmosphere for 1,000 years or more.

Despite its relatively small contribution so far, emissions of the gas are ticking up, and the growth rate has been climbing every year. SF6 emissions in China nearly doubled between 2011 and 2021, accounting for more than half the world’s emissions of the gas.

Now, companies are looking to do away with equipment that relies on the gas and searching for replacements that can match its performance. Last week, Hitachi Energy announced it’s producing new equipment that replaces SF6 with other materials. And there’s momentum building to ban SF6 in the power industry, including a recently passed plan in the European Union that will phase out the gas’s use in high-voltage equipment by 2032. 

As equipment manufacturers work to produce alternatives, some researchers say that we should go even further and are trying to find solutions that avoid fluorine-containing materials entirely.

High voltage, high stakes

You probably have a circuit-breaker box in your home—if a circuit gets overloaded, the breaker flips, stopping the flow of electricity. The power grid has something similar, called switchgear.  

The difference is, it often needs to handle something like a million times more energy than your home’s equipment does, says Markus Heimbach, executive vice president and managing director of the high-voltage products business unit at Hitachi Energy. That’s because parts of the power grid operate at high voltages, allowing them to move energy around while losing as little as possible. Those high voltages require careful insulation at all times and safety measures in case something goes wrong.

Some switchgear uses the same materials as your home circuit-breaker boxes—there’s air around it to insulate it. But when it’s scaled up to handle high voltage, it ends up being gigantic and requiring a large land footprint, making it inconvenient for larger, denser cities.

The solution today is SF6, “a super gas, from a technology point of view,” Heimbach says. It’s able to insulate equipment during normal operation and help interrupt current when needed. And the whole thing has a much smaller footprint than air-insulated equipment.

The problem is, small amounts of SF6 leak out of equipment during normal operation, and more can be released during a failure or when old equipment isn’t handled properly. When the gas escapes, its strong ability to trap heat and its long atmospheric lifetime make it a menace.

Some governments will soon ban the gas for the power industry, which makes up the vast majority of the emissions. The European Union agreed to ban SF6-containing medium-voltage switchgear by 2030, and high-voltage switchgear that uses the gas by 2032. Several states in the US have proposed or adopted limits and phaseouts.

Making changes 

Hitachi Energy recently announced it’s producing high-voltage switchgear that can handle up to 550 kilovolts (kV). The model follows products rated for 420 kV that the company began installing in 2023; more than 250 have been booked by customers to date, Heimbach says.

Hitachi Energy’s new switchgear substitutes SF6 with a gas mixture that contains mostly carbon dioxide and oxygen. It works as well as SF6 and is as safe and reliable but with a much lower global warming potential, trapping 99% less energy in the atmosphere, Heimbach says. 

However, for some of its new equipment, Hitachi Energy still uses some C4-fluoronitriles, which help with insulation, Heimbach says. This gas is present at a low fraction, less than 5% of the mixture, and it’s less potent than SF6, Heimbach says. But C4-fluoronitriles are still powerful greenhouse gases, up to a few thousand times more potent than carbon dioxide. These and other fluorinated substances could soon be in trouble too—chemical giant 3M announced in late 2022 that the company would stop manufacturing all fluoropolymers, fluorinated fluids, and PFAS-additive products by 2025.

In order to eliminate the need for fluorine-containing gases, some researchers are looking into the grid’s past for alternatives. “We know that there’s no one-for-one replacement gas that has the properties of SF6,” says Lukas Graber, an associate professor in electrical engineering at Georgia Institute of Technology.

SF6 is both extremely stable and extremely electronegative, meaning it tends to grab onto free electrons, and nothing else can quite match it, Graber says. So he’s working on a research project that aims to replace SF6 gas with supercritical carbon dioxide. (Supercritical fluids are those at temperatures and pressures so high that distinct liquid and gas phases don’t quite exist.) The inspiration came from equipment that used to use oil-based materials—instead of trying to grab electrons like SF6, supercritical carbon dioxide can basically slow them down.

Graber and his research team received project funding from the US Department of Energy’s Advanced Research Projects Agency for Energy. The first small-scale prototype is nearly finished, he adds, and the plan is to test out a full-scale prototype in 2025.

Utilities are known for being conservative, since the safety and reliability of the electrical grid have high stakes, Hitachi Energy’s Heimbach says. But with more SF6 bans coming, they’ll need to find and adopt solutions that don’t rely on the gas.