Google Expands ‘About This Image’ To More Platforms via @sejournal, @MattGSouthern

Google has announced the expansion of its “About this image” feature to additional platforms, including Circle to Search and Google Lens.

This move gives people more access points to obtain context about images they encounter online.

New Access Points

The “About this image” tool, which offers information about an image’s origins and usage, is now available through:

  1. Circle to Search: A feature on select Android devices
  2. Google Lens: Available in the Google app on both Android and iOS

Functionality & Usage

You can access the feature through different methods depending on the platform:

For Circle to Search:

  • Activate the feature by long-pressing the home button or navigation bar
  • Circle or tap the image on the screen
  • Swipe up on search results and select the “About this image” tab

For Google Lens:

  • Screenshot or download the image
  • Open the Google app and use the Lens icon
  • Select the image and tap the “About this image” tab

Information Provided

The tool offers various details about images, including:

  • How other websites use and describe the image
  • Available metadata
  • Identification of AI-generated images with specific watermarks

Availability & Language Support

“About this image” is available in 40 languages globally, including French, German, Hindi, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese.

Broader Context

This expansion comes at a time when digital literacy and the ability to verify online information are increasingly important.

However, it’s worth noting that while such tools can be helpful, they’re not infallible.

Users are still encouraged to critically evaluate information and consult multiple sources when verifying claims or images online.

How Does This Help You?

Here’s how the expansion of Google’s “About this image” feature can help you:

  • Quickly verify claims associated with images.
  • Understand where an image originated and how it’s been used across the web.
  • Distinguish between human-created and AI-generated visual content.
  • Gather context and potential sources for an image quickly, whether you’re a student, journalist, or researcher.
  • Protect yourself from visual manipulation tactics often used in scams by understanding an image’s history and context.

Related Algorithm Update: Combating Explicit Deepfakes

Today, Google announced an algorithm update targeting explicit deepfakes in search results.

Key aspects of this update include:

  1. Improved Content Removal: When a removal request is approved, the system will attempt to filter similar explicit results across related searches for the affected individual.
  2. Ranking Adjustments: The search algorithm has been modified to reduce the visibility of explicit fake content in many searches. For queries seeking such content and including people’s names, Google will prioritize non-explicit content, such as news articles.
  3. Site-Wide Impact: Websites with numerous pages removed due to fake explicit imagery may see changes in their overall search rankings.

Google reports that these changes have reduced exposure to explicit image results, with such results decreasing by over 70% on targeted searches.

Google’s doing two things at once: making it easier to spot fake images and cracking down on deepfakes algorithmically.

These updates demonstrate Google’s commitment to keeping search results safe and trustworthy as the web changes.


Featured Image: Screenshot from blog.google.com, July 2024. 

Navigating A Cookieless Future: PPC Strategies For Privacy-First World

The digital advertising landscape is constantly changing, and a recent announcement from Google has shifted things yet again.

On July 22, 2024, Google made a surprising U-turn on its long-standing plan to phase out third-party cookies in Chrome.

This decision comes after years of back-and-forth between Google, regulatory bodies, and the advertising industry.

Advertisers have relied on third-party cookies – small pieces of code placed on users’ browsers by external websites – to track online behavior, build detailed user profiles, and serve targeted ads across the web.

The initial plan to remove these cookies was driven by growing privacy concerns and regulations such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in the US.

However, Google’s recent announcement doesn’t mean the cookieless future has been called off. Instead, it signals a more gradual, user-choice-driven transition, allowing advertisers to keep cookies a little longer.

Google now plans to introduce a new experience in Chrome that will enable users to make informed choices about their web browsing privacy, which they can adjust at any time, thus giving control back to the user.

This change in approach emphasizes the ongoing tension between privacy concerns and the need for effective digital advertising.

While third-party cookies may stick around longer than initially expected, the trend toward more privacy-focused solutions continues. As such, it’s crucial for businesses running PPC campaigns to stay informed and adaptable.

In this article, we’ll examine the debate surrounding the elimination of cookies for enhanced privacy, explore the potential alternatives to third-party cookies, and discuss how these changes might shape the future of PPC campaigns in an evolving digital landscape.

Should We Get Rid Of Cookies For Enhanced Privacy?

The digital advertising industry has been debating this question for years.

Despite Google’s recent decision to keep third-party cookies in Chrome, the overall direction of the industry is moving towards more privacy-focused solutions.

Other major browsers, including Safari and Firefox, have already implemented restrictions on third-party cookies, underlining the industry trend toward increased privacy for users.

Of course, whether cookieless is the best path to greater privacy is still debated.

Eliminating cookies would reduce some forms of tracking, but it could also spur efforts toward arguably even more invasive tracking methods.

Cookies also serve user-friendly purposes, such as storing login information and user preferences.

As the industry continues to debate these questions, one thing is clear: the future of digital advertising will be a balancing act between user privacy and effective ad targeting.

Whether that balance is struck by accepting the eventual phase-out of third-party cookies or by developing new, more privacy-friendly technologies, only time will tell.

What Options Are There To Replace Third-Party Cookies?

The urgency to find replacements eased after Google announced that Chrome would retain third-party cookies while adding more controls for users.

However, Google is still moving forward with its Privacy Sandbox initiative, which aims to develop privacy-preserving alternatives to third-party cookies.

The Privacy Sandbox is a collective name given to ongoing collaborative efforts to create new technologies designed to protect user privacy while ensuring digital ads are as effective as possible.

Over the past several years, Google has announced a raft of APIs under this initiative, including the Topics API, Protected Audience API, and Attribution Reporting API.

These technologies are designed to offer a subset of the functionality of third-party cookies in a far more privacy-friendly manner.

While Google decided to retain third-party cookies for the time being, it is worth noting that the company is still investing in these alternative technologies.

This reflects the fact that the trend in the long run is toward a more privacy-centric Web, even though the transition might be happening at a slightly slower pace than initially planned.

In mid-2023, Google announced the release of 6 new APIs for Chrome version 115, designed to replace some functionalities of third-party cookies:

  • The Topics API allows the browser to show ads based on broad interest categories or “topics” that users care about without tracking them individually. For example, topics could include fitness, travel, books, and literature.
  • Protected Audience API enables interest-based advertising by allowing an “interest group owner” to ask a user’s Chrome browser to add a membership for a specific interest group.
  • Attribution Reporting API helps advertisers understand the most effective ads without revealing individual user data.
  • Private Aggregation API works with aggregated data from the Topics API and Shared Storage, similar to Attribution Reporting.
  • Shared Storage API allows advertisers to show relevant ads without accessing visitors’ personal information.
  • Fenced Frames API enables websites to display ads in a privacy-safe manner without tracking or collecting visitor information.

It’s important to note that these APIs are still evolving, and more may be developed in the future.

The UK’s Competition and Markets Authority (CMA) has raised concerns about various aspects of these APIs, including user consent interfaces, the potential for abuse, and impacts on competition in the digital advertising market.

As a digital marketer, it’s crucial to stay informed about these developments and be prepared to adapt your strategies as these new technologies roll out.

While they aim to provide privacy-friendly alternatives to third-party cookies, they will likely require new approaches to targeting, measuring, and optimizing your PPC campaigns.

First-Party Data

As third-party cookies slowly become a thing of the past, first-party data becomes very important. First-party data is information you collect directly from your audience or customers, including the following:

  • Website or app usage patterns.
  • Purchase history.
  • Email newsletter subscriptions.
  • Customer feedback forms and online surveys.
  • Social media engagement with your brand.

First-party data is collected with users’ consent, which helps keep it compliant with privacy regulations.

It also provides direct insight into your customers and their interactions with your brand, enabling more accurate and relevant targeting.

Alternative Tracking Methods

As the industry moves away from third-party cookies, several new tracking and measurement methods are emerging:

Consent Mode V2: A feature that adjusts Google tags based on user consent choices. When a user doesn’t consent to cookies, Consent Mode automatically adapts tag behavior to respect the user’s preference while still providing some measurement capabilities. This approach gives users more control over their data and its use, balancing user privacy and advertisers’ data needs.

Enhanced Conversions: Implementing this improves conversion measurement accuracy using first-party data. It uses hashed customer data like email addresses to connect online activity with actual conversions, even when cookies are limited. By utilizing secure hashing to protect user data while improving measurement, Enhanced Conversions offers a privacy-focused solution for tracking conversions.
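To illustrate the hashing step, here is a minimal Python sketch of how an email address might be normalized and hashed with SHA-256 before being sent. The normalization rules shown (trimming and lowercasing) are a common convention, not Google’s exact specification, so check your platform’s documentation:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Normalize an email address, then hash it with SHA-256.

    Trimming whitespace and lowercasing are common normalization steps;
    check your ad platform's documentation for its exact requirements.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Formatting differences disappear after normalization, so the hashes match
# and the platform can join the record without ever seeing the raw address.
h1 = normalize_and_hash("  Jane.Doe@Example.com ")
h2 = normalize_and_hash("jane.doe@example.com")
print(h1 == h2, len(h1))  # True 64
```

Because only the hash leaves your systems, the raw address is never exposed in transit, yet identical addresses still match on the platform’s side.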

Server-Side Tracking: This method collects data from the user’s browser and sends it to the server. Instead of placing tracking pixels or scripts on the user’s browser, data is collected and processed on the server side. This method reduces user data exposure in the browser, improving security and website performance while allowing for effective tracking.
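As a rough illustration of the server-side idea, the sketch below shows a hypothetical function a server might use to build an analytics event from data the browser sends, attaching identifiers only when the user has consented. The field names and consent logic are assumptions for illustration, not any specific platform’s API:

```python
import time

# Hypothetical server-side tracking sketch. Instead of a browser-side pixel,
# the browser posts a minimal event to your own server; the server enriches
# it and forwards it to the analytics platform. Field names are illustrative.

def build_server_event(browser_event: dict, consent_granted: bool) -> dict:
    """Build the payload the server forwards to the analytics endpoint."""
    event = {
        "name": browser_event.get("name", "page_view"),
        "page": browser_event.get("page"),
        "timestamp": int(time.time()),  # stamped on the server, not the browser
    }
    # Attach a user identifier only if the user consented to tracking.
    if consent_granted and "client_id" in browser_event:
        event["client_id"] = browser_event["client_id"]
    return event

# Without consent, the identifier is dropped before forwarding.
payload = build_server_event(
    {"name": "purchase", "page": "/checkout", "client_id": "abc123"},
    consent_granted=False,
)
print("client_id" in payload)  # False
```

The key design point is that your server becomes the gatekeeper: it decides what leaves your infrastructure, rather than third-party scripts running in the visitor’s browser.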

Customer Lists: This utilizes first-party data for audience targeting and remarketing. Advertisers can upload hashed lists of customer information, like email addresses, to platforms for targeting or measurement purposes. This approach relies on data that customers have directly provided to the business rather than third-party tracking, making it a more privacy-conscious method of audience targeting.

Offline Conversion Tracking: OCT connects online ad interactions with offline conversions. It uses unique identifiers to link clicks on online ads to offline actions such as phone calls or in-store purchases. This method provides a more holistic view of the customer journey without relying on extensive online tracking, bridging the gap between digital advertising and real-world conversions.
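The matching logic behind offline conversion tracking can be sketched in a few lines of Python. The click IDs and sales records below are made-up examples of how an ad platform’s click identifier (such as a GCLID) recorded with a lead might later be joined to offline sales:

```python
# Illustrative sketch: online ad clicks are tagged with a unique click ID
# that is stored alongside the lead and later matched to offline sales.

ad_clicks = {"click_001": "2024-07-01", "click_002": "2024-07-02"}  # click_id -> click date
offline_sales = [
    {"click_id": "click_002", "amount": 250.0},
    {"click_id": "click_999", "amount": 80.0},  # no matching online click
]

# Attribute only the sales whose click ID matches a tracked ad click.
attributed = [sale for sale in offline_sales if sale["click_id"] in ad_clicks]
total = sum(sale["amount"] for sale in attributed)
print(len(attributed), total)  # 1 250.0
```

In practice the attributed records would be uploaded back to the ad platform so its reporting reflects the offline revenue, but the core idea is this simple join on a click identifier.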

Small businesses, with their adaptability, can navigate these changes.

Though no single method is a perfect replacement for third-party cookies, together these alternatives can supply similar functionality for advertisers while addressing the privacy concerns that drove the cookies’ planned deprecation.

Advertisers are likely to need this combination of methods to achieve desired advertising and measurement goals in the era beyond cookies.

Long-Term Strategies For Small Businesses

1. First-Party Data Collection Strategy

Shift your focus to collecting data directly from your customers:

  • Add email sign-up forms to your website.
  • Create loyalty programs or offer valuable content in exchange for customer information.
  • Use tools like Google Analytics to track user interactions on your website.
  • Run customer feedback surveys to understand how customers view your business and learn more about them.

Success here depends on building trust:

  • Be open and transparent about how you collect and use customer data.
  • Clearly communicate the value customers receive in return for their information.
  • Offer an easy opt-out option; customers must retain control over their data.
  • Provide regular training to raise employee awareness about privacy regulations and best practices for handling customer data.

Invest in a robust CRM system to help organize and manage first-party data effectively.

2. Diversify Your Marketing Channels

Businesses should not put all their eggs in one basket.

Yes, the need for PPC will always be there; however, in light of these changes, it is imperative to diversify marketing efforts across multiple channels and platforms.

Diversification allows you to reach customers through numerous touchpoints and reduces your reliance upon any platform or technology.

Remember that the rule of seven states that a prospect needs to “hear” (or see) the brand’s message at least seven times before they take action to buy that product or service.

3. Embrace Contextual Targeting

Contextual targeting displays advertisements based on webpage content rather than user profiles. To work with this approach:

  • Choose relevant, meaningful keywords and topics aligned with your products or services.
  • Select placements where your target audience is most likely to be viewing.
  • Produce several ad creatives tailored to different contexts to improve relevance.

Pros Of Contextual Targeting

  • Privacy-friendly since it does not utilize personal data.
  • When done well, it is remarkably effective at reaching people actively interested in related subjects.

Cons Of Contextual Targeting

  • Targeting may be less precise than audience-based methods.
  • Requires careful planning and analysis of page content.

4. Use Tracking Solutions With A Focus On Privacy

Next comes server-side tracking and conversion APIs (refer to this article’s Alternative Tracking Methods section for more information). These methods shift data collection from the user’s browser to your server.

Pros

  • Improved data accuracy: Server-side tracking can capture events that client-side tracking might miss due to ad blockers or browser restrictions.
  • Cross-device tracking capabilities: Server-side solutions can more easily track user interactions across different devices and platforms.
  • Future-proofing: As browser restrictions on cookies and client-side tracking increase, server-side solutions will likely remain more stable and effective in the long term.
  • Ability to enrich data: Server-side tracking allows data integration from multiple sources before sending it to analytics platforms, potentially providing richer insights.

Cons

  • Increased complexity: Server-side tracking and conversion APIs are more technically complex than traditional client-side methods, potentially requiring specialized skills or resources to implement and maintain.
  • Potential latency issues: Server-side tracking may introduce slight delays in data processing, which could impact real-time analytics or personalization efforts.
  • Ongoing maintenance: Server-side solutions often require more regular updates and maintenance to ensure they remain effective and compliant with evolving privacy regulations.

These solutions can be technically demanding, so consider partnering with a developer or an agency to ensure proper implementation.

5. Investment In Creative Optimization

With reduced accuracy in targeting, your ad creative is more crucial than ever:

  • Design creative, eye-catching visuals.
  • Write bold, clear ad copy that delivers your value proposition quickly.
  • Test different ad formats to find what resonates with your audience.
  • Run A/B tests on ad variations, images, headlines, and CTAs.

6. Embrace Privacy-First Solutions

Track the numerous efforts underway within Google’s Privacy Sandbox and other fast-developing privacy-centric solutions.

Be prepared to test these tools and to scale up their adoption upon release to stay ahead of the curve.

For now, enable Enhanced Conversions inside Google Ads to deliver a better model of your return on ad spend (ROAS) using hashed first-party data.

7. Train And Educate Employees

Provide continuous training to your workforce:

  • Educate your employees about data privacy and security.
  • Keep them updated with all the latest privacy regulations and their impact on businesses.
  • Conduct training on best practices in collecting, storing, and using customer data.
  • Embed a culture of privacy awareness across the organization.

8. Collaborate With Experts

Navigating a cookieless future can be tricky.

A PPC agency or consultant can help you with the latest changes and best practices, implement advanced tracking and targeting solutions, and optimize your campaigns in this new landscape.

When choosing an agency:

  • Check for experience in privacy-first campaigns.
  • Ask about their approach to first-party data and alternative targeting methods.
  • Look for a track record of adapting to industry changes.

Start Now And Be Flexible As Digital Advertising Changes

Google’s decision to keep third-party cookies in Chrome while adding more user controls represents a significant shift in the digital advertising landscape.

While this move grants some breathing room to advertisers heavily reliant on third-party cookies, it doesn’t change the overall trend toward user privacy and control over personal data.

The strategies outlined in this article – focusing on first-party data collection, diversifying marketing channels, embracing contextual targeting, and investing in privacy-focused solutions – remain relevant for long-term success in digital advertising.

These approaches will help you navigate the current landscape and prepare you for a future where user privacy is increasingly prioritized.

Yes, third-party cookies are sticking around longer than initially expected, but the push for more privacy-friendly advertising solutions continues.

By implementing these strategies now, you’ll be better positioned to adapt to further changes down the road, whether they come from regulatory bodies, browser policies, or changing consumer expectations.

The time to start future-proofing is now. Start by auditing your existing strategies, building first-party data assets, and testing new targeting and measurement capabilities.

Stay informed about developments in privacy-preserving technologies like Google’s Privacy Sandbox, and be prepared to test and implement these new tools when they become available.

Taking a proactive, strategic approach that puts the user’s privacy and trust first ensures that your PPC campaigns will continue to thrive. The future of digital advertising may be uncertain.

Still, with the appropriate strategies and respect for users’ privacy, you can turn these challenges into opportunities for growth and innovation.



Featured Image: BestForBest/Shutterstock

OpenAI has released a new ChatGPT bot that you can talk to

OpenAI is rolling out an advanced AI chatbot that you can talk to. It’s available today—at least for some. 

The new chatbot represents OpenAI’s push into a new generation of AI-powered voice assistants in the vein of Siri and Alexa, but with far more capabilities to enable more natural, fluent conversations. It is a step in the march toward more fully capable AI agents. The new ChatGPT voice bot can tell what different tones of voice convey, respond to interruptions, and reply to queries in real time. It has also been trained to sound more natural and to use its voice to convey a wide range of emotions.

The voice mode is powered by OpenAI’s new GPT-4o model, which combines voice, text, and vision capabilities. To gather feedback, the company is initially launching the chatbot to a “small group of users” paying for ChatGPT Plus, but it says it will make the bot available to all ChatGPT Plus subscribers this fall. A ChatGPT Plus subscription costs $20 a month. OpenAI says it will notify customers who are part of the first rollout wave in the ChatGPT app and provide instructions on how to use the new model.   

The new voice feature, which was announced in May, is being launched a month later than originally planned because the company said it needed more time to improve safety features, such as the model’s ability to detect and refuse unwanted content. The company also said it was preparing its infrastructure to offer real-time responses to millions of users. 

OpenAI says it has tested the model’s voice capabilities with more than 100 external red-teamers, who were tasked with probing the model for flaws. These testers spoke a total of 45 languages and represented 29 countries, according to OpenAI.

The company says it has put several safety mechanisms in place. In a move that aims to prevent the model from being used to create audio deepfakes, for example, it has created four preset voices in collaboration with voice actors. GPT-4o will not impersonate or generate other people’s voices.  

When OpenAI first introduced GPT-4o, the company faced a backlash over its use of a voice called “Sky,” which sounded a lot like the actress Scarlett Johansson. Johansson released a statement saying the company had reached out to her for permission to use her voice for the model, which she declined. She said she was shocked to hear a voice “eerily similar” to hers in the model’s demo. OpenAI has denied that the voice is Johansson’s but has paused the use of Sky. 

The company is also embroiled in several lawsuits over alleged copyright infringement. OpenAI says it has adopted filters that recognize and block requests to generate music or other copyrighted audio. OpenAI also says it has applied the same safety mechanisms it uses in its text-based model to GPT-4o to prevent it from breaking laws and generating harmful content. 

Down the line, OpenAI plans to include more advanced features, such as video and screen sharing, which could make the assistant more useful. In its May demo, employees pointed their phone cameras at a piece of paper and asked the AI model to help them solve math equations. They also shared their computer screens and asked the model to help them solve coding problems. OpenAI says these features will not be available now but at an unspecified later date. 

3-Tiered Index Hints SEO Link Value

Leaked documentation from Google suggests that where the search giant stores a link to a web page determines, at least in part, the page’s authority.

Google stores its index of web pages on a three-tiered hardware system, each with different processing speeds. The leak from Google’s engineering documents, first reported by consultant Mike King, suggests a link’s authority depends on its storage tier — faster tiers equate to more authority.

“It was something we already suspected, but data in the leak suggests that the value of a link relates to where the page that the link exists on is stored in Google’s index,” says consultant Barry Adams, who helps publishers optimize articles for the Google News ecosystem, including Top Stories and Discover. He also noted that Google’s News algorithm processes content much faster than universal search.


Google stores its index of web page links across three hardware tiers. Links on the RAM tier likely have higher authority.

“When an article is published, Google wants to show it in Top Stories as soon as possible because people want the latest news and updates. A lot of the processes that Google would perform as part of regular indexing and ranking of content don’t necessarily apply to News. To process News search results, Google has to take a few shortcuts along the way,” says Adams.

Ted Kubaitis at SEO Tool Lab has tested how quickly Google recognizes and incorporates new links into its rankings. So far, Google has taken at least three weeks to recognize and apply the authority of new backlinks to the rankings of web pages. News cycles are much faster.

Google overtook Yahoo years ago as the leading search engine by treating links as recommendations from web users and ranking pages accordingly. The more recommendations or links a web page has from other domains, the higher that page tends to rank. Google calls this “citation indexing.” The logic is recursive: if all other ranking factors are equal, a link from a web page with many trustworthy links pointing to it carries more authority than a link from a page with none.

Storage Tiers

Google’s index of web pages resides on three hardware tiers:

  • RAM: This is the fastest and most valuable tier, reserved for popular and highly ranked web pages. RAM storage includes popular News stories, which suggests backlinks from News articles pass more authority than those from niche blogs or low-traffic sites.
  • Solid state drives: The middle tier offers faster retrieval than traditional rotating hard drives but is slower than RAM. Pages stored here are less frequently accessed than those in RAM but still hold significant value.
  • Hard disk drives: The slowest tier, rotating hard disks, stores older or less frequently accessed pages. These pages are less valuable in terms of link value.

News Publishers

Engagement is a long-suspected ranking signal, especially in Google’s Discover feed. The leak data confirms what I and other SEO consultants already surmised: Google’s ranking algorithm likely tracks user interaction metrics.

That means search-result listings must generate clicks and engagement to last. To secure a top ranking, pages must appear in search results and receive clicks from users who then stick around for at least a few seconds. If they bounce (leave right away) and visit another listing for longer, a page won’t likely maintain its rankings.

News publishers with rigid paywalls experience low engagement metrics, resulting in lesser rankings in Google Discover feeds and thus fewer external citations.

To mitigate, many publishers implement a “leaky” paywall wherein visitors can view a few free pages before subscribing. This approach encourages engagement, enhancing the content’s potential inclusion in Top Stories and Discover feeds. However, excessive ads, pop-up overlays, and interstitials can boost bounce rates and lower engagement signals.

SEO Implications

For search optimizers, this three-tiered revelation necessitates a shift in link-building tactics. Traditional methods, such as acquiring links from niche sites or older content likely stored on less valuable tiers, may have a diminishing impact on rankings.

Thus link-building tactics should prioritize web pages stored on Google’s RAM tier. This means securing mentions in high-traffic news articles and publications, which pass a higher link value.

Barry Adams states, “You need creative link-building campaigns with creative content marketing to generate news stories. You also need what I call serious PR to try and get press coverage in news stories.”

Google’s recent addition of Site Reputation Abuse to its web spam policies — “when third-party pages are published with little or no first-party oversight or involvement” — could eventually result in the search engine ignoring press release pickups, particularly if absentee publishers aggregate company news releases on a no-index subdomain.

As of this writing, I still see positive results from search-engine-optimized press releases, whether they generate original news coverage or get aggregated on a publisher’s subdomain. But publishers and press release distributors appear to be scrutinizing the content much more diligently.

New Ecommerce Tools: July 30, 2024

Every week we publish a rundown of new products from companies offering services to ecommerce and omnichannel merchants. This installment includes updates on AI-powered content creation, automated deliveries, shoppable video, website speed, logistics, and sustainable ecommerce packaging.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants: July 30, 2024

Vayu Robotics debuts an on-road delivery robot. Vayu Robotics has released an on-road delivery robot that combines AI foundation models with low-cost passive sensors, eliminating the need for light detection and ranging (lidar) sensors. Vayu’s delivery robot operates autonomously without pre-mapping the roads it intends to drive on. It can navigate inside stores and on city streets, and it can unload packages on driveways or porches, carrying up to 100 pounds at under 20 mph. According to the company, the delivery robot offers a cost-effective, safe, and reliable delivery system.


Addlly AI launches Shopify AI Writer for content creation. Singapore-based generative AI company Addlly AI has launched a zero-prompt AI SEO Writer app for Shopify. Users can select products from their Shopify store and review and select from suggested trending blog topics tailored to their chosen products. The AI writer then generates a full ecommerce article with automatic internal product linking. Users can review and edit the content, adding product images as needed. Additionally, the app generates ready-to-post social media content.

Website Speedy launches app on BigCommerce to enhance site performance. Website Speedy, a provider of performance optimization tools, has launched on the BigCommerce app store, providing a solution to enhance website speed, search engine rankings, and user experience. The Website Speedy app offers a suite of features to maximize website performance, including asynchronous loading, resource prefetching, third-party script management, and detailed performance reports.

Shirofune releases Improvement Suggestion feature for Amazon Ads. Shirofune, a digital advertising automation management tool, has announced an enhanced Improvement Suggestion feature that proposes optimal keyword and search query settings for Amazon Ads. Shirofune’s Improvement Suggestion identifies keywords and search queries that increase costs without delivering the desired results, automatically listing those that need to be excluded or reviewed. It also recommends additional keywords to drive better performance. Additionally, the feature recommends excluding high-cost keywords and incorporating negative keywords.

Home page of Shirofune

Kinesso Commerce and Vudoo partner to launch ShopNow solution. Kinesso Commerce, IPG Mediabrands’ commerce division, has partnered with technology and content commerce company Vudoo to launch ShopNow, an always-on commerce media solution. ShopNow combines Kinesso Commerce’s activated media with Vudoo’s advanced shoppable ad technology, enabling content to be shoppable all the time across the open web and leading commerce partners, including Amazon. The global partnership will launch in Australia.

Fast Simon launches merchandising AI assistant. Fast Simon, a provider of AI-powered shopping optimization, has launched a merchandising AI assistant. Merchants can chat with the assistant to help create an assortment and display for shoppers in real time based on product attributes, availability, performance metrics, and more. According to Fast Simon, the AI assistant reduces the time and effort required for manual and rule-based merchandising while leaving merchants in control of what shoppers see.

Cart.com launches Constellation warehouse and transportation management systems. Cart.com, a provider of unified commerce and logistics software and services, has launched Constellation WMS and Constellation TMS, making the company’s proprietary warehouse and transportation management systems available as standalone offerings. These modules join Constellation OMS, Cart.com’s distributed order management system, already available. The products can be used independently or combined for receiving, inventory management, picking and packing, rate shopping, shipping, returns, and labor management.

Home page of Cart.com

Creator commerce platform LTK launches video-first shopping app. LTK, a creator commerce platform, is launching a new LTK app video experience to help creators reach shoppers on the platform. LTK’s app includes Full-Screen Continuous Video (a shoppable video feed showcasing a creator’s videos back-to-back) and Daily Drops (a quick way for shoppers to view the latest shoppable video content published in the last 24 hours from the creators they follow).

Mondi partners with CMC Packaging Automation for sustainable ecommerce packaging. Mondi, a supplier of sustainable paper and packaging, has partnered with CMC Packaging Automation, a provider of automated packaging on-demand solutions. The collaboration will focus on developing packaging solutions for enhanced product quality and sustainability. It aims to meet the demand for sustainable ecommerce paper and packaging solutions, building synergies between CMC’s packaging on-demand technology and Mondi’s sustainable kraft paper products.

AI and ecommerce specialist Ocula raises £3.25 million in Series A funding. Belfast-based AI startup Ocula has raised £3.25 million ($4.17 million) in Series A funding, led by Praetura Ventures, with additional investments from Castelnau Group and Lloyds Banking Group. Ocula specializes in leveraging AI to optimize ecommerce performance. Its primary product, Ocula Boost, uses AI to enhance product pages, improve SEO, and generate insights that help retailers compete more effectively. The startup has secured over 25 major retail clients.

India-based Mystore unveils AI-powered semantic search for ONDC. Mystore, an Indian marketplace connected to the country’s Open Network for Digital Commerce (ONDC), has added AI-powered semantic search designed to enhance seller conversions through an improved buying experience. This addition to Mystore’s offerings for ONDC sellers and buyers aims to provide granular search results through exact query interpretation. Mystore’s semantic search technology is designed to benefit sellers and buyers through accurate query interpretation, tailored product recommendations, quick search results, optimized performance for complex queries, and higher conversion rates.

Mystore home page

AI Integration in Marketing: Strategic Insights For SEO & Agency Leaders via @sejournal, @CallRail

This edited extract is from Data Storytelling in Marketing by Caroline Florence ©2024 and is reproduced and adapted with permission from Kogan Page Ltd.

Storytelling is an integral part of the human experience. People have been communicating observations and data to each other for millennia using the same principles of persuasion that are being used today.

However, the means by which we can generate data and insights and tell stories has shifted significantly and will continue to do so, as technology plays an ever-greater role in our ability to collect, process, and find meaning from the wealth of information available.

So, what is the future of data storytelling?

I think we’ve all talked about data being the engine that powers business decision-making. And there’s no escaping the role that AI and data are going to play in the future.

So, I think the more data literate and aware you are, the more informed and evidence-led you can be about your decisions, regardless of what field you are in – because that is the future we’re all working towards and going to embrace, right?

It’s about relevance and being at the forefront of cutting-edge technology.

Sanica Menezes, Head of Customer Analytics, Aviva

The Near Future Scenario

Imagine simply applying a generative AI tool to your marketing data dashboards to create audience-ready copy. The tool creates a clear narrative structure, synthesized from the relevant datasets, with actionable and insightful messages relevant to the target audience.

The tool isn’t just producing vague and generic output with questionable accuracy but is sophisticated enough to help you co-author technically robust and compelling content that integrates a level of human insight.

Writing stories from vast and complex datasets will not only drive efficiency and save time, but free up the human co-author to think more creatively about how they deliver the end story to land the message, gain traction with recommendations and influence decisions and actions.

There is still a clear role for the human to play as co-author, including the quality of the prompts given, expert interpretation, nuance of language, and customization for key audiences.

But the human co-author is no longer bogged down by the complex and time-consuming process of gathering different data sources and analysing data for insights. The human co-author can focus on synthesizing findings to make sense of patterns or trends and perfect their insight, judgement, and communication.

In my conversations with expert contributors, the consensus was that AI would have a significant impact on data storytelling but would never replace the need for human intervention.

This vision for the future of storytelling is (almost) here. Tools like this already exist and are being further improved, enhanced, and rolled out to market as I write this book.

But the reality is that the skills involved in leveraging these tools are no different from the skills needed to currently build, create, and deliver great data stories. If anything, the risks involved in not having human co-authors means acquiring the skills covered in this book become even more valuable.

In the AI storytelling exercise WIN conducted, the tool came up with “80 per cent of people are healthy” as its key point. Well, it’s just not an interesting fact.

Whereas the humans looking at the same data were able to see a trend of increasing stress, which is far more interesting as a story. AI could analyse the data in seconds, but my feeling is that it needs a lot of really good prompting in order for it to seriously help with the storytelling bit.

I’m much more positive about it being able to create 100 slides for me from the data and that may make it easier for me to pick out what the story is.

Richard Colwell, CEO, Red C Research & Marketing Group

We did a recent experiment with the Inspirient AI platform taking a big, big, big dataset, and in three minutes, it was able to produce 1,000 slides with decent titles and design.

Then you can ask it a question about anything, and it can produce 110 slides, 30 slides, whatever you want. So, there is no reason why people should be wasting time on the data in that way.

AI is going to make a massive difference – and then we bring in the human skill which is contextualization, storytelling, thinking about the impact and the relevance to the strategy and all that stuff the computer is never going to be able to do.

Lucy Davison, Founder And CEO, Keen As Mustard Marketing

Other Innovations Impacting On Data Storytelling

Besides AI, there are a number of other key trends that are likely to have an impact on our approach to data storytelling in the future:

Synthetic Data

Synthetic data is data that has been created artificially through computer simulation to take the place of real-world data. Whilst already used in many data models to supplement real-world data or when real-world data is not available, the incidence of synthetic data is likely to grow in the near future.

According to Gartner (2023), by 2024, 60 per cent of the data used in training AI models will be synthetically generated.

Speaking in Marketing Week (2023), Mark Ritson cites academic studies to date showing around 90 per cent accuracy for AI-derived consumer data when triangulated with data generated from primary human sources.

This means that it has a huge potential to help create data stories to inform strategies and plans.

Virtual And Augmented Reality

Virtual and augmented reality will enable us to generate more immersive and interactive experiences as part of our data storytelling. Audiences will be able to step into the story world, interact with the data, and influence the narrative outcomes.

This technology is already being used in the world of entertainment to blur the lines between traditional linear television and interactive video games, creating a new form of content consumption.

Within data storytelling, we can easily imagine a world of simulated customer conversations taking place while the customer navigates the website or retail environment.

Instead of static visualizations and charts showing data, the audience will be able to overlay data onto their physical environment and embed data from different sources accessed at the touch of a button.

Transmedia Storytelling

Transmedia storytelling will continue to evolve, with narratives spanning multiple platforms and media. Data storytellers will be expected to create interconnected storylines across different media and channels, enabling audiences to engage with the data story in different ways.

We are already seeing these tools being used in data journalism where embedded audio and video, on-the-ground eyewitness content, live-data feeds, data visualization and photography sit alongside more traditional editorial commentary and narrative storytelling.

For a great example of this in practice, look at the Pulitzer Prize-winning “Snow Fall: The Avalanche at Tunnel Creek” (Branch, 2012), which changed the way The New York Times approached data storytelling.

In the marketing world, some teams are already investing in high-end knowledge share portals or embedding tools alongside their intranet and internet to bring multiple media together in one place to tell the data story.

User-Generated Content

User-generated content will also have a greater influence on data storytelling. With the rise of social media and online communities, audiences will actively participate in creating and sharing stories.

Platforms will emerge that enable collaboration between storytellers and audiences, allowing for the co-creation of narratives and fostering a sense of community around storytelling.

Tailoring narratives to the individual audience member based on their preferences, and even their emotional state, will lead to greater expectations of customization in data storytelling to enhance engagement and impact.

Moving beyond the traditional “You said, so we did” communication with customers to demonstrate how their feedback has been actioned, user-generated content will enable customers to play a more central role in sharing their experiences and expectations.

These advanced tools are a complement to, and not a substitution for, the human creativity and critical thinking that great data storytelling requires. If used appropriately, they can enhance your data storytelling, but they cannot do it for you.

Whether you work with Microsoft Excel or access reports from more sophisticated business intelligence tools, such as Microsoft Power BI, Tableau, Looker Studio, or Qlik, you will still need to take those outputs and use your skills as a data storyteller to curate them in ways that are useful for your end audience.

There are some great knowledge-sharing platforms out there that can integrate outputs from existing data storytelling tools and help curate content in one place. Some can be built into existing platforms that might be accessible within your business, like Confluence.

Some can be custom-built using external tools for a bespoke need, such as creating a micro-site for your data story using WordPress. And some can be brought in at scale to integrate with existing Microsoft or Google tools.

The list of what is available is extensive but will typically be dependent on what is available IT-wise within your own organization.

The Continuing Role Of The Human In Data Storytelling

In this evolving world, the role of the data storyteller doesn’t disappear but becomes ever more critical.

The human data storyteller still has many important roles to play, and the skills necessary to influence and engage cynical, discerning, and overwhelmed audiences become even more valuable.

Now that white papers, marketing copy, internal presentations, and digital content can all be generated faster than humans could ever manage on their own, the risk of information overload becomes inevitable without a skilled storyteller to curate the content.

Today, the human data storyteller is crucial for:

  • Ensuring we are not telling “any old story” just because we can and that the story is relevant to the business context and needs.
  • Understanding the inputs being used by the tool, including limitations and potential bias, as well as ensuring data is used ethically and that it is accurate, reliable, and obtained with the appropriate permissions.
  • Framing queries in the right way to incorporate the relevant context, issues, and target audience needs to inform the knowledge base.
  • Cross-referencing and synthesizing AI-generated insights or synthetic data with human expertise and subject domain knowledge to ensure the relevance and accuracy of recommendations.
  • Leveraging the different VR, AR, and transmedia tools available to select the right one for the job.

To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code SEJ25 at koganpage.com.


Featured Image: PopTika/Shutterstock

Search GPT – Can Search GPT Disrupt Google Search? via @sejournal, @Kevin_Indig

Despite initial concerns, Chat GPT has not replaced search. Record Q2 earnings show Google Search is doing better than ever. That’s why OpenAI’s new search engine, Search GPT, only makes sense after a second look.

$5b USD

Why would OpenAI launch a search engine if its main product poses one of the biggest threats to Google?

Image Credit: Kevin Indig

Searches for “LLM Search” are growing, but it’s not consumer demand that pulls OpenAI in that direction. There are six good reasons (in order of importance):

1/ OpenAI’s problem is that Chat GPT is not perceived as a search engine despite similar capabilities, so the company positions Search GPT as a direct Google alternative to gain more Search market share.

Rumors about launching a search engine just before Google I/O in 2024 and the impact of the actual announcement on Alphabet’s stock show the ambition to compete directly.

The Information reports that OpenAI loses $5b a year in expenses.[1] Just capturing 3% of Google’s $175b Search business would allow OpenAI to recoup expenses.

Image Credit: Kevin Indig

Searches for ChatGPT on Google are growing so much that they are getting close to searches for “Google.” They’ve already surpassed searches for other search engines by a lot.

To be fair, people search less for “Google” on Google (perhaps only in their browser bar to get to the Google homepage), and traffic numbers between Google (465 billion, according to Similarweb) and Chat GPT (660 million) are still orders of magnitude apart.

Image Credit: Kevin Indig

OpenAI has a strategic advantage over Google: Search GPT can provide a very different, maybe less noisy, user experience than Google because it’s not reliant on ad revenue. In any decision regarding Search, Google needs to take ads into account.

2/ OpenAI crawls the web for training data and already has half the ingredients for a search engine on the table. Consumers are already familiar with the concept of a search engine, making adoption more likely.

I have no doubt that OpenAI will see a lot of curious sign-ups for Search GPT, but the bigger challenge will be retaining users.

It’s also important to point out that the market hasn’t found the final form of LLMs yet. Chatbots made sense because of their prompting nature, but voice devices will likely become much better devices for LLMs.

3/ Search can deliver better user signals than prompting because it’s a more specific use case.

The beauty of prompting is that it’s an open field. You can do whatever you want. But that’s also a disadvantage because most people have no idea what they want to do and where to start.

As a result, success and failure are harder to measure at scale for chatbots than search engines.

A search engine, despite being versatile, has clearer use cases, which could drive more adoption and deliver better signals for LLMs to learn. In return, those learnings could transfer to chatbot answers, which are a big part of Search GPT.

4/ OpenAI wants to throw publishers a lifeline to secure a content pipeline. LLM developers need fresh content to train models and serve timely answers.

Search is the biggest source of publisher traffic,[2] but publishers are growing more frustrated with Google due to algorithm updates, site reputation abuse penalties, and AI Overviews.

It’s good timing for OpenAI to offer another source of revenue and get publishers “on their side”, especially after OpenAI itself has received a lot of criticism from publishers and a lawsuit from the NY Times.

The launch of SearchGPT follows a long list of publisher licensing deals:

  1. News Corp (+$250 million over five years): WSJ, New York Post, The Times, The Sun
  2. Associated Press (AP)
  3. Axel Springer: Bild, Politico, Business Insider
  4. Financial Times
  5. Le Monde
  6. Reuters
  7. Prisa Media
  8. Dotdash Meredith
  9. Time magazine
  10. Vox Media
  11. Wiley (one-time fee of $23 million for previously published academic articles and books)

But even the best deals don’t help if publishers cannot sustain the creation of fresh content. If Search GPT can become a new traffic and revenue source for publishers, it would be a way to keep the critical ecosystem alive and get on the good side of publishers.

5/ Perplexity is a small challenger to OpenAI, but even a small challenger can take away mind share, and you never want to underestimate the competition. A search engine would conveniently fence in their growth. Why use Perplexity when Search GPT, which looks very similar, can do the same thing?

6/ OpenAI might bet on regulators breaking up Google’s exclusive search engine deal with Apple and hope to become part of a search engine choice set on Apple devices.

Granted, we’re talking about a very small chance, and certainly not the decisive factor for building a search engine, but it could be a small factor nonetheless.

Publisher GPT

Search GPT is clearly the sibling of Chat GPT. Besides SERP features like weather charts and table-stakes features like auto-suggest, the experience feels like Chat GPT.

The differences are hard to spot at first but meaningful in their potential to drive revenue, compete with Google and strengthen OpenAI’s data mining.

But one change stands out: Search GPT has more pronounced links to web results, a clear hat tip to publishers.

The Search GPT landing page mentions the word publisher 14 times and underlines how important publishers are for the open web and how dedicated OpenAI is to working with them.

OpenAI uses a different user agent to crawl websites for its search engine than for LLM training and strongly separates the two.

Importantly, SearchGPT is about search and is separate from training OpenAI’s generative AI foundation models. Sites can be surfaced in search results even if they opt out of generative AI training.

It’s not an accident that OpenAI tries to regain its grip on the web. A recently published study[3] found that 25% of words (tokens) in Common Crawl stem from domains that have since excluded AI crawlers in their robots.txt or ToS, with OpenAI’s crawlers at the top of the blocked list.
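In practice, that separation means a site can stay eligible for Search GPT while opting out of model training. A hedged robots.txt sketch, assuming OpenAI’s announced user-agent tokens (OAI-SearchBot for search, GPTBot for training; verify against OpenAI’s current crawler documentation before relying on these):

```text
# Allow OpenAI's search crawler while blocking its training crawler.
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /
```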

SEO Implications

The two questions every SEO is asking are whether they should care about Search GPT and how it might work.

Search GPT has a chance to become relevant for SEO quickly, given Chat GPT’s adoption. The Apple Intelligence integration and a potential phone would spur adoption even more.

However, OpenAI might integrate Search GPT into Chat GPT, which could change the relevance as a traffic source.

We cannot yet know how Search GPT works because it’s not live, but one big differentiator will be whether Search GPT includes results from the broad web or only from publishers OpenAI made a deal with.

If it’s the broad web, Search GPT has a high chance of being relevant. If it’s limited to partnering publishers, SEO won’t make sense for anyone not a partner because the answer set is limited.

If Search GPT uses RAG and ranks results similar to Google’s AI Overviews, we could use AIO performance as an indicator and predictor for Search GPT performance.

There is also a chance that an answer from Chat GPT for queries that don’t require QDF (query deserves freshness) is the same on Search GPT, which would give us a way to understand what works before Search GPT launches publicly. That is hard to validate without access to Search GPT, though.

Search GPT could gain the web’s favor by sending relevant traffic, making it easy for sites to submit content, for example, through XML sitemaps, and providing some sort of webmaster console. As a result, Search GPT would position itself even stronger against Google.
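Content submission of the kind described above would presumably follow the standard sitemaps.org protocol. A minimal XML sitemap looks like this (example.com and the dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-08-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/latest-post</loc>
    <lastmod>2024-08-05</lastmod>
  </url>
</urlset>
```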

A New Way To Search

If the main benefit of Search GPT for OpenAI is a revenue stream and access to more user data, the next logical step for OpenAI is to build an (AI-powered) browser.

Browser data is incredibly valuable for understanding user behavior, personalization and LLM training. Best of all, it’s app-agnostic, so OpenAI could learn from users even when they use Perplexity or Google.

We’ve seen the power of browser data in the Google lawsuit, where it turned out Google relied on Chrome data all along for ranking. The only layer that’s more powerful is the operating system and device layer.

Image Credit: Kevin Indig

There is already news that Sam Altman is working with Jony Ive on building a phone. No wonder, since Apple holds immense power over other ecosystems and platforms.

Remember when Apple blew a $10b hole into Meta’s annual revenue? Apple could develop its own models and surface them on the OS level—a critical threat to OpenAI. A browser could alleviate at least some of that threat.

Bing released its own update to Search, giving us an idea of what Search GPT could look like. The new Bing prominently features AI answers at the top, with search results off to the side, a fitting metaphor for the fate of classic blue links.

Image Credit: Kevin Indig

1. Why OpenAI Could Lose $5 Billion This Year
2. Who Sends Traffic on the Web and How Much? New Research from Datos & SparkToro
3. Consent in Crisis: The Rapid Decline of the AI Data Commons

How To Get Ad Creative AI-Ready In Search Ads via @sejournal, @LisaRocksSEM

PPC ads are at the core of many marketers’ digital media mix, so it is critical to stay on top of new features and ad tech advancements.

The hot topic and major changes involve AI to automate and improve the efficiency of pay-per-click campaigns.

However, it’s unsurprising that AI is not replacing the marketing strategist or PPC manager. On the contrary, the marketing team is more important than ever to provide high-quality manual creative ad assets to the ad platforms to inform the AI machine.

While this is an automated process, the advertiser is at the heart of this, providing the creative strategy, vision, and messaging. You power the AI.

Note: This article focuses on Google Ads due to their market share and volume, but many of these suggestions can be applied to any ad platform.

The AI power component comes into play through “automatically created assets,” where the “asset” is the headline or description of the ad copy. The “manual assets” refers to what the advertiser writes and enters into Google Ads.

Consider:

  • Automatically created assets generate new headlines and descriptions continuously to form ad copy that is relevant to the query and ad’s content.
  • Automated assets are used alongside and in combination with advertiser manual assets to show the search ad that is predicted to perform the strongest.
  • Reporting (asset details report) is available on automatically generated content, and advertisers can remove it if needed.

First, the advertiser must opt in at the campaign level. This is a good thing because there is prep work to be done to get the greatest benefit from AI tools.

Opt-in to automatically created assets in Google Ads

Next, prepare your inputs for AI-driven ads:

  • Landing page.
  • Manual assets.
  • Keywords.

1. Landing Page And Website

One key aspect of getting the most from AI is the landing page and website content. The AI algorithms need content to draw from, and the better the content, the better the potential outputs.

After providing the URL, Google will find the relevant information on your page to generate ad creative.

Clear and ad-focused content. Clever, flowery marketing language or excessive jargon takes a back seat to concise, fact-based language.

Correct and up-to-date content. This can be a challenge for some smaller advertisers who do not have complete control over frequent content changes without incurring web dev fees. You may want to wait on AI right now if this is the case. For everyone else, be proactive and mindful about writing content for ads. Next, the “reactive” approach is to review reports to determine if web content should be altered or automated assets should be removed. The automated assets will change if the web content changes or assets are removed.

Content alignment. Ensure that the landing page content aligns with the ad copy. Keywords and messaging should be consistent with the search ads to maintain user trust and engagement.

Clear value proposition. Highlight the unique benefits or solutions your product/service offers. Use a compelling headline and subheadline that immediately captures the visitor’s attention.

Write strong CTAs. A common problem is advertisers simply not being clear in their CTAs in ad copy or on landing pages. Use action-oriented language repeated on the page and ad. Stick to one clear CTA.

Bonus: It is also a good idea to make a long-term plan to permanently change web content to aid AI in the future, as it is here to stay.

2. Manual Assets

The “manual assets” refer to the headlines and descriptions you write and enter into Google Ads.

Since the headlines and descriptions are served in many combinations, it’s important that the assets communicate a cohesive message. Enabling AI will allow automatically generated descriptions to be thrown into the mix.

Headlines: You can provide up to 15 headlines with a maximum of 30 characters each, so try to assign a strategy to the type and number of headlines. For example, include:

  • Brand name: Although your company will appear in the “business name” field, it might not be the brand or product name you need to communicate here. Control the language and use a pin if needed.
  • Keywords from the ad group.
  • CTA: Ensure this is consistent and consider pinning this headline.
  • Feature/benefit.

Descriptions: Up to 4 total, with up to 90 characters per description.

While you want to use keywords, unique headlines, and consider Google suggestions, the ad should ultimately be clear, concise, and persuasive enough to get users to click and convert.
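As a quick sanity check before pasting assets into the Google Ads interface, the character limits above can be encoded in a few lines. This is a hypothetical helper, not part of any Google Ads API; the caps (15 headlines of 30 characters, four descriptions of 90 characters) come from the limits described above:

```python
# Google's documented responsive search ad limits (see text above).
MAX_HEADLINES, HEADLINE_LEN = 15, 30
MAX_DESCRIPTIONS, DESCRIPTION_LEN = 4, 90

def validate_rsa_assets(headlines, descriptions):
    """Return a list of problems found in a set of manual RSA assets."""
    problems = []
    if len(headlines) > MAX_HEADLINES:
        problems.append(f"too many headlines: {len(headlines)} > {MAX_HEADLINES}")
    if len(descriptions) > MAX_DESCRIPTIONS:
        problems.append(f"too many descriptions: {len(descriptions)} > {MAX_DESCRIPTIONS}")
    for i, h in enumerate(headlines, start=1):
        if len(h) > HEADLINE_LEN:
            problems.append(f"headline {i} is {len(h)} chars (max {HEADLINE_LEN}): {h!r}")
    for i, d in enumerate(descriptions, start=1):
        if len(d) > DESCRIPTION_LEN:
            problems.append(f"description {i} is {len(d)} chars (max {DESCRIPTION_LEN}): {d!r}")
    return problems
```

An empty return value means all assets fit within the limits; anything returned is a specific asset to shorten or remove before upload.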

3. Keywords

In search campaigns with automated assets, keywords still matter and inform Google and, ultimately, your customer what the ad is about.

We want to stick with best practices in account structure and keyword selection. The keywords should always be closely connected to your ad and landing page.

If you have not been a perfectionist on this recently, now is a good time to review and evaluate your keywords, keeping AI automation in mind.

A great shortcut: While in the ad creation tool, venture into the ad suggestions area, where Google will show you “popular keywords from the ad group” that you can click to add as a headline.

Finally, Performance

  • You can see the automatically created assets that were generated and served by reviewing the asset detail report.
  • Remove assets you don’t like: under the Responsive search ad > View asset details.

Automated asset details in Google Ads

Conclusion

One of the misconceptions about AI in ads and automatically generated assets is that it’s a “set and forget” model that solves problems with ad management resources, time, and performance.

As we can see above, the process to enable AI takes knowledge, strategy, and ongoing management to ensure the messages served to your ideal customers can meet and exceed what is done manually.

This can only be achieved with humans setting it up for success.

Learn more on this topic: Google Ads: A Quick Guide To Every AI-Powered Ad Creative Feature (And What’s Coming Soon)


Featured Image: VectorMine/Shutterstock