How Digital Has Changed Branding 

This edited extract is from Digital Branding by Daniel Rowles ©2025 and is reproduced and adapted with permission from Kogan Page Ltd.

If we could only get a feel for someone’s personality from what they tell us about themselves, we would end up with a very shallow understanding of them.

We may also have difficulty believing in the personality that has been constructed – and we may start to question the motivations behind what they are telling us about themselves.

That is exactly the situation with commercial branding that uses broadcast channels such as TV.

A personality is sculpted, and then we are told what the personality is. We don’t get to discuss, engage with, and really understand the true personality.

Digital media now means, however, that the conversation is no longer one way. I can challenge, ask questions, and develop a truer picture of the brand. I can see through a sculpted brand and start to see it for what it truly is.

This can be a scary thing for many traditional brands. It can also be a huge opportunity.

AI And Branding

Artificial intelligence (AI) has had a profound impact on the world around us in just the past few years.

Things that were science fiction, or at least extremely difficult to do unless you had sufficient expertise and resources, are now made possible with tools available to everyone.

These AI-based tools and their application can have a huge impact on our digital branding.

This can be as simple as using machine learning to optimize a website experience – for example, by showing somebody content customized specifically for them – or using generative AI to generate an image that follows our brand guidelines.

However, beyond these AI tools that help us become more efficient and more effective, we also find ourselves faced with new risks and ethical challenges.

Let’s take a real-world example. Using a low-cost tool like Eleven Labs, I can create a completely convincing deepfake of my own voice. I can then type in any text I want and get the deepfake voice to read that text.

The voice sounds exactly like me, and I can even generate the same speech multiple times, and each time it will sound slightly different and have ever so slightly different intonation, just as I would if I read the same thing multiple times.

To test the quality of this voice, I replaced a segment of me speaking on the Digital Marketing Podcast, which has over 150,000 listeners. Nobody noticed.

So if I can deepfake myself, do I need to even bother recording podcasts anymore? Do I need to inform my listeners?

Legally, I don’t have to, but I would suggest that ethically I should. If I trust you and then suddenly find, after listening to your voice for some time, that it wasn’t really you, it will damage my trust.

AI gives our brands a huge array of tools to improve what we do, but we also need to be careful about how and when we use them, so that we don’t damage the trust in our brands.

Any use of AI needs to be done mindfully, considering the impact it may have.

Global Soapbox

If a brand is essentially the personality of something, digital media gives us the ability and opportunity to understand that true personality.

We can then use that understanding to help guide us in our decision-making processes.

This is a great opportunity from a customer point of view.

For example, it means that instead of being put on hold for an hour when phoning a call centre and having little choice but to tolerate it, I can now go straight to one of many social media channels and make my frustrations very clear and very visible.

I now have a global soapbox with access to all of the other potential customers out there, and I can impact a global organization’s brand in a way that was not possible before (or, at least, was incredibly difficult).

That highly visible complaint then becomes part of other people’s brand perception (fairly or not), and suddenly the years of building a brand can be toppled very quickly.

This is a fundamentally changed environment for businesses to operate in – ignoring the change can lead to serious problems.

This ability to engage with and research a brand can also be looked at from an even simpler point of view.

Perhaps I am researching buying a car or a B2B service. I can now do a lot of research and inform my decision before I speak to the car dealership or service vendor.

When I do make this final step, I am far more informed and have developed a fairly in-depth perception of the brand before I engage directly with them.

In fact, from the information I gleaned online, I may have opted out from even considering certain brands.

That information may have been on a third-party website in the form of a review or comment from someone I have never met, but I may trust it over the voice of the brand itself.

Social Media Fail

This fast-changing environment, and the slow pace at which businesses adapt to it, is leading to the social media disaster stories we see online on a daily basis.

Most social media disasters demonstrate a lack of knowledge of how to use a particular social media channel in practice, or a belief that the brand can somehow manipulate the channel and escape this need for authenticity and transparency.

The other common theme is that of failing to understand the changed role of the brand in this two-way conversation.

Traditional Brand Metrics

Traditionally, a brand has been measured by asking questions to judge what someone thinks of it, and by trying to work out what this means for potential sales.

There is a wide range of different ways of looking at this, but generally, we would take some sort of sample survey of our audience and see what their attitudes were before and after exposure to some form of marketing.

This survey would ask a range of questions, and there are lots of different approaches, but fundamentally, we would look to answer the following questions:

  • Are you aware of the brand?
  • Do you like the brand?
  • Do you intend to buy the brand?
  • If you have purchased, do you intend to do it again?

Essentially, we are assuming that if we can get more people to answer positively to each of these questions, we are likely to get more sales.

This can still be an extremely valid process, but only when effectively integrated into an overall approach.

Sum Of All Experiences

Essentially, digital branding is the personality of our organization, service, or product created by the sum of all experiences that an individual has with that brand.

This still includes things such as visual identity, but now also includes much more important and influential touchpoints such as social media interactions and online reviews.

Your logo may make you recognizable, but it is your overall brand that decides what I remember you for.


To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code SEJ25 at koganpage.com.



Google Confirms Alt Text Is Not Primarily An SEO Decision via @sejournal, @martinibuster

Google’s John Mueller shared Jeffrey Zeldman’s Bluesky post reminding publishers and SEOs of proper alt text usage, including a link to the W3C decision tree for guidance. The most important takeaway is that the decision process for alt text is not primarily an SEO decision.

The W3C (World Wide Web Consortium) is an international standards-making body for the web. Much of the guidance Google provides about how Googlebot crawls HTML and treats server response codes is based on web standards developed by the W3C, so it’s always a good idea to go straight to the source to understand exactly how to deploy HTML features like alt text. Doing it the right way will very likely align with the same standards Google is using.

A decision tree is basically a decision-making diagram that asks a series of yes-or-no questions. If the answer is “no,” the tree leads to another branch; answering “yes” leads to a node that advises on what to do. The purpose of the W3C alt text decision tree is to guide publishers and SEOs on the proper use of alt text, which is for accessibility.

The decision tree that Zeldman linked to has five questions:

  1. Does the image contain text?
  2. Is the image used in a link or a button, and would it be hard or impossible to understand what the link or the button does, if the image wasn’t there?
  3. Does the image contribute meaning to the current page or context?
  4. Is the image purely decorative or not intended for users?
  5. Is the image’s use not listed above or it’s unclear what alt text to provide?
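
As a rough illustration of where those questions lead, the hypothetical markup below (the file names are invented for this sketch) shows three common outcomes: descriptive alt text for a meaningful image, an empty alt attribute for a decorative one, and action-describing alt text for an image used as a link.

```html
<!-- The image contributes meaning: describe its content -->
<img src="sales-chart.png" alt="Bar chart showing sales doubling from 2023 to 2024">

<!-- Purely decorative image: empty alt so screen readers skip it -->
<img src="divider-flourish.png" alt="">

<!-- Image used as a link: describe the destination, not the picture -->
<a href="/cart"><img src="cart-icon.png" alt="View shopping cart"></a>
```

Note that an empty alt="" is deliberate, not missing: it tells assistive technology the image can safely be ignored.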

Google’s John Mueller Affirms Proper Use Of Alt Text

John Mueller reposted it on Bluesky with the additional insight that the decision-making process for alt text is not “primarily” an SEO decision, meaning that accessibility should be the first consideration when deciding how to write alt text.

This is what John Mueller said about alt text:

“The choice of ALT text is not primarily an SEO decision.

If you like working with structured processes, check out, bookmark, share, and use this decision tree of when & what to use as ALT text, when it comes to accessibility.”

Zeldman’s post praised the simplicity of the decision tree:

“So straightforward, so good. An ALT text decision tree. “

Someone else posted a link to an interactive version of the decision tree called the “Alt text decide-o-matic.”

Check out the W3C Alt text decision tree here or try the decide-o-matic to become better acquainted with alt text best practices and become a better SEO and publisher in the process.


Useful quantum computing is inevitable—and increasingly imminent

On January 8, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, at the same time suggesting those computers will need Nvidia GPUs in order to implement the necessary error correction. 

However, history shows that brilliant people are not immune to making mistakes. Huang’s predictions miss the mark, both on the timeline for useful quantum computing and on the role his company’s technology will play in that future.

I’ve been closely following developments in quantum computing as an investor, and it’s clear to me that it is rapidly converging on utility. Last year, Google’s Willow device demonstrated that there is a promising pathway to scaling up to bigger and bigger computers. It showed that errors can be reduced exponentially as the number of quantum bits, or qubits, increases. It also ran a benchmark test in under five minutes that would take one of today’s fastest supercomputers 10 septillion years. While too small to be commercially useful with known algorithms, Willow shows that quantum supremacy (executing a task that is effectively impossible for any classical computer to handle in a reasonable amount of time) and fault tolerance (correcting errors faster than they are made) are achievable.

For example, PsiQuantum, a startup my company is invested in, is set to break ground on two quantum computers that will enter commercial service before the end of this decade. The plan is for each one to be 10 thousand times the size of Willow, big enough to tackle important questions about materials, drugs, and the quantum aspects of nature. These computers will not use GPUs to implement error correction. Rather, they will have custom hardware, operating at speeds that would be impossible with Nvidia hardware.

At the same time, quantum algorithms are improving far faster than hardware. A recent collaboration between the pharmaceutical giant Boehringer Ingelheim and PsiQuantum demonstrated a more than 200x improvement in algorithms to simulate important drugs and materials. Phasecraft, another company we have invested in, has improved the simulation performance for a wide variety of crystal materials and has published a quantum-enhanced version of a widely used materials science algorithm that is tantalizingly close to beating all classical implementations on existing hardware.

Advances like these lead me to believe that useful quantum computing is inevitable and increasingly imminent. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve.

We should care about the prospect of useful quantum computers because today we don’t really know how to do chemistry. We lack knowledge about the mechanisms of action for many of our most important drugs. The catalysts that drive our industries are generally poorly understood, require expensive exotic materials, or both. Despite appearances, we have significant gaps in our agency over the physical world; our achievements belie the fact that we are, in many ways, stumbling around in the dark.

Nature operates on the principles of quantum mechanics. Our classical computational methods fail to accurately capture the quantum nature of reality, even though much of our high-performance computing resources are dedicated to this pursuit. Despite all the intellectual and financial capital expended, we still don’t understand why the painkiller acetaminophen works, how type-II superconductors function, or why a simple crystal of iron and nitrogen can produce a magnet with such incredible field strength. We search for compounds in Amazonian tree bark to cure cancer and other maladies, manually rummaging through a pitifully small subset of a design space encompassing 10⁶⁰ small molecules. It’s more than a little embarrassing.

We do, however, have some tools to work with. In industry, density functional theory (DFT) is the workhorse of computational chemistry and materials modeling, widely used to investigate the electronic structure of many-body systems—such as atoms, molecules, and solids. When DFT is applied to systems where electron-electron correlations are weak, it produces reasonable results. But it fails entirely on a broad class of interesting problems. 

Take, for example, the buzz in the summer of 2023 around the “room-temperature superconductor” LK-99. Many accomplished chemists turned to DFT to try to characterize the material and determine whether it was, indeed, a superconductor. Results were, to put it politely, mixed—so we abandoned our best computational methods, returning to mortar and pestle to try to make some of the stuff. Sadly, although LK-99 might have many novel characteristics, a room-temperature superconductor it isn’t. That’s unfortunate, as such a material could revolutionize energy generation, transmission, and storage, not to mention magnetic confinement for fusion reactors, particle accelerators, and more.

AI will certainly help with our understanding of materials, but it is no panacea. New AI techniques have emerged in the last few years, with some promising results. DeepMind’s Graph Networks for Materials Exploration (GNoME), for example, found 380,000 new potentially stable materials. At its core, though, GNoME depends on DFT, so its performance is only as good as DFT’s ability to produce good answers. 

The fundamental issue is that an AI model is only as good as the data it’s trained on. Training an LLM on the entire internet corpus, for instance, can yield a model that has a reasonable grasp of most human culture and can process language effectively. But if DFT fails for any non-trivially correlated quantum systems, how useful can a DFT-derived training set really be? We could also turn to synthesis and experimentation to create training data, but the number of physical samples we can realistically produce is minuscule relative to the vast design space, leaving a great deal of potential untapped. Only once we have reliable quantum simulations to produce sufficiently accurate training data will we be able to create AI models that answer quantum questions on classical hardware.

And that means that we need quantum computers. They afford us the opportunity to shift from a world of discovery to a world of design. Today’s iterative process of guessing, synthesizing, and testing materials is comically inadequate.

In a few tantalizing cases, we have stumbled on materials, like superconductors, with near-magical properties. How many more might these new tools reveal in the coming years? We will eventually have machines with millions of qubits that, when used to simulate crystalline materials, open up a vast new design space. It will be like waking up one day and finding a million new elements with fascinating properties on the periodic table.

Of course, building a million-qubit quantum computer is not for the faint of heart. Such machines will be the size of supercomputers, and require large amounts of capital, cryoplant, electricity, concrete, and steel. They also require silicon photonics components that perform well beyond anything in industry, error correction hardware that runs fast enough to chase photons, and single-photon detectors with unprecedented sensitivity. But after years of research and development, and more than a billion dollars of investment, the challenge is now moving from science and engineering to construction.

It is impossible to fully predict how quantum computing will affect our world, but a thought exercise might offer a mental model of some of the possibilities. 

Imagine our world without metal. We could have wooden houses built with stone tools, agriculture, wooden plows, movable type, printing, poetry, and even thoughtfully edited science periodicals. But we would have no inkling of phenomena like electricity or electromagnetism—no motors, generators, radio, MRI machines, silicon, or AI. We wouldn’t miss them, as we’d be oblivious to their existence. 

Today, we are living in a world without quantum materials, oblivious to the unrealized potential and abundance that lie just out of sight. With large-scale quantum computers on the horizon and advancements in quantum algorithms, we are poised to shift from discovery to design, entering an era of unprecedented dynamism in chemistry, materials science, and medicine. It will be a new age of mastery over the physical world.

Peter Barrett is a general partner at Playground Global, which invests in early-stage deep-tech companies including several in quantum computing, quantum algorithms, and quantum sensing: PsiQuantum, Phasecraft, NVision, and Ideon.

The Download: China’s DeepSeek, and useful quantum computing

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How a top Chinese AI model overcame US sanctions

The AI community is abuzz over DeepSeek R1, a new open-source reasoning model. 

The model was developed by the Chinese AI startup DeepSeek, which claims that R1 matches or even surpasses OpenAI’s ChatGPT o1 on multiple key benchmarks but operates at a fraction of the cost.

DeepSeek’s success is even more remarkable given the constraints facing Chinese AI companies in the form of increasing US export controls on cutting-edge chips. But early evidence shows that these measures are not working as intended. Rather than weakening China’s AI capabilities, the sanctions appear to be driving startups like DeepSeek to innovate in ways that prioritize efficiency, resource-pooling, and collaboration. Read the full story.

—Caiwei Chen

Useful quantum computing is inevitable—and increasingly imminent

—Peter Barrett is a general partner at Playground Global, which invests in early-stage deep-tech companies

On January 8, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, at the same time suggesting those computers will need Nvidia GPUs in order to implement the necessary error correction. 

However, history shows that brilliant people are not immune to making mistakes. Huang’s predictions miss the mark, both on the timeline for useful quantum computing and on the role his company’s technology will play in that future.

I’ve been closely following developments in quantum computing as an investor, and it’s clear to me that useful quantum computing is inevitable and increasingly imminent. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 AI pioneers are clashing over its potential dangers  
Yann LeCun, Meta’s AI chief scientist, has branded experts’ grave warnings hypocritical. (FT $)
+ AI’s biggest cheerleaders tend to know the least about it. (Wired $)
+ How existential risk became the biggest meme in AI. (MIT Technology Review)

2 This surveillance tech could enable Donald Trump’s deportation plans
From mass biometric databases to phone jailbreaking tools. (NYT $)
+ It really doesn’t have to be like this. (The Atlantic $)
+ Trump has declared policing the US-Mexican border his “number one issue.” (FT $)
+ He’s ordered the end of the CBP One border migration app. (MIT Technology Review)

3 The European Union is watching Big Tech like a hawk
It’s concerned about disinformation spreading ahead of next month’s German election. (Bloomberg $)

4 Trump’s meme coins are bad news for the crypto industry
The community was hoping the President would legitimize cryptocurrency, rather than leaning into its scammier side. (WP $)
+ It’s a blow to the fans hoping he’ll ‘make Bitcoin great again.’ (The Guardian)
+ Trump’s biggest supporters stand to lose the most from his crypto grift. (Vox)

5 AI is helping to pin down what caused the Los Angeles wildfires 
Determining the truth could take months. AI is speeding that process up. (Wired $)

6 Elon Musk’s gaming skills are under fire
Hardcore gamers are questioning how he was seemingly playing during Trump’s inauguration. (NYT $)

7 The European Medicines Agency has had enough of X
And has moved to Bluesky instead. (Reuters)

8 Vietnam is deploying robots to help run its postal service
Including delivering parcels and sorting packages in warehouses. (Rest of World)

9 Startups are in for a rough year
Thousands of companies were funded between 2020 and 2021. Now, plenty are shutting down. (TechCrunch)
+ Gaming startups in the UK are struggling for cash. (BBC)

10 A newly-discovered asteroid turned out to be Musk’s Tesla Roadster
The car and its mannequin driver have been floating in space since 2018. (USA Today)
+ The world’s next big environmental problem could come from space. (MIT Technology Review)

Quote of the day

“I think within five years, nobody in their right mind would use them anymore.”

—Yann LeCun, Meta’s chief AI scientist, says he believes that the technologies powering the current wave of large language models will soon become obsolete, TechCrunch reports.

The big story

How culture drives foul play on the internet, and how new “upcode” can protect us

August 2023

From Bored Apes and Fancy Bears to Shiba Inu coins, self-replicating viruses, and whales, the internet is crawling with fraud, hacks, and scams.

And while new technologies come and go, they change little about the fact that online illegal operations exist because some people are willing to act illegally, and others fall for the stories they tell.

Ultimately, online crime is a human story. But why does it work, and how can we protect ourselves from falling for such schemes? Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I can’t believe these albums were released 50 years ago: featuring Bob Dylan, Donna Summer, and The Boss.
+ What one man’s search for happiness taught him about himself.
+ More twins are being born than ever before—but why? 👯
+ Wolfgang Amadeus Mozart was born on this day in 1756. Enjoy this stunning piano concerto in his honor!

Plain-English Guide to Shopify Liquid

Liquid is Shopify’s template engine that brings backend store data to the public-facing front-end. Merchants who understand Liquid can unlock new store customizations without needing a developer.

In this article, I will explain the basics.

Liquid is a bridge between a store’s content and how it’s displayed. Each page or section has a template that resides in a file with the .liquid suffix; main-product.liquid, for example, contains the product template.

Screenshot: the product template resides in main-product.liquid.

Shopify organizes files based on what they do. For example, the “sections” folder contains files defining entire parts of a site, such as headers or a product gallery. The “snippets” folder applies to smaller components, such as buttons or a specific design element.

Snippets can be rendered inside sections. A merchant could create a custom button in a snippet file and include it in a section file, for example. To place “snippet_name.liquid” inside “main-product.liquid,” I would position my cursor at the right point in the product file and add {% render 'snippet_name' %}.
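
To make that concrete, here is a sketch of what such a snippet file might contain. The snippet name, markup, and class names are all hypothetical:

```liquid
{%- comment -%} snippets/stock-badge.liquid (hypothetical snippet) {%- endcomment -%}
{% if product.available %}
  <span class="badge badge--in-stock">In stock</span>
{% else %}
  <span class="badge badge--sold-out">Sold out</span>
{% endif %}
```

In main-product.liquid, it would be included with {% render 'stock-badge', product: product %}. The render tag creates its own variable scope, so any objects the snippet needs (here, product) must be passed in explicitly.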

Syntax

Liquid functions with defined terms and phrases — a syntax.

Variables

Variables represent dynamic information. For example, {{ product.title }} dynamically displays the title of the product. Note how Liquid uses double curly braces ( {{ }} ) to output the variable’s value.

Objects

Objects are collections of data. Examples include product, collection, and customer.

  • product holds all the information about a specific product, such as title, ID, description, and price. To display a product’s price, create a variable {{ product.price }} where product is the object and price is the property. Shopify publishes a list of all product object properties.
  • collection represents a group of products, such as a category. Pull information from the collection object for every product assigned to it or for assigned information such as title, description, and product count. Here’s Shopify’s list of collection properties.
  • customer contains info about the logged-in user, such as name, email address, physical address, marketing consent, and order preferences. Here are all customer properties.
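
Putting these objects together, a sketch like the one below (the surrounding markup and wording are illustrative) could greet a logged-in customer and summarize the current collection and product:

```liquid
{% if customer %}
  <p>Welcome back, {{ customer.first_name }}!</p>
{% endif %}

<h2>{{ collection.title }}</h2>
<p>{{ collection.products_count }} products in this collection.</p>
<p>{{ product.title }}: {{ product.price | money }}</p>
```

Note that product.price is stored in the shop’s smallest currency unit, so the money filter is typically chained on to format it for display.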

Tags

Tags add logic to Liquid code via two main types, “control flow” and “iteration.”

Control-flow tags drive logic, such as if/else statements.

{% if product.available %}
This product is in stock!
{% else %}
Sorry, this product is out of stock.
{% endif %}

Iteration tags repeat actions, such as looping through products in a collection.

{% for product in collection.products %}
{{ product.title }}
{% endfor %}
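
As a small extension of that loop (the limit value here is arbitrary), iteration tags also expose a forloop object and accept parameters such as limit:

```liquid
{% for product in collection.products limit: 4 %}
{{ forloop.index }}. {{ product.title }}
{% endfor %}
```

forloop.index is 1-based, so this renders a numbered list of the first four products in the collection.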

Filters

Filters transform the data Liquid retrieves. For example:

  • {{ product.title | upcase }} displays the product title in uppercase letters.
  • {{ product.price | times: 1.2 }} increases the price by 20%.
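
Filters can also be chained, applying left to right. For example (the truncation length is arbitrary):

```liquid
{{ product.title | upcase | truncate: 24 }}
{{ product.price | times: 1.2 | money }}
```

In the second line, times runs before money, so the raw price is increased by 20% first and then formatted as currency.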

Custom Message Example

Here’s a real-life example. Imagine you want to display a custom message on your product pages under the title when an item is in stock or out of stock.

Here’s how to do it.

  1. In the Shopify admin, navigate to Online Store > Themes > Actions (left button with dots …) > Edit Code.
  2. Find and open the main-product.liquid file from the sections folder.
  3. Search for {%- when 'title' -%} using Ctrl + F to locate the title.

Position the cursor under the closing </div> tag and add:

{% if product.available %}
<p style="color: green;">This product is available! Get it while stocks last!</p>
{% else %}
<p style="color: red;">Sorry, this product is currently out of stock.</p>
{% endif %}
Screenshot: add a custom message when an item is in stock or out of stock.

Save your changes and preview the store. In this example, shoppers will see a green message when an item is in stock and a red message when it is out of stock.

The message “Sorry, this product is currently out of stock” appears in red.

Getting Started

Experimenting and testing is the best way to learn.

  • Back up your theme. Always duplicate your theme before changing it. Click Actions > Duplicate in the Themes section of the admin.
  • Use Preview mode. Shopify allows previews of changes before taking them live.
  • Start small. Begin with minor changes.


‘Your Money or Your Life,’ Defined by Google

Google hires human teams to review the search results for various queries and assign quality ratings for each ranking URL. Google says the purpose is to help ensure helpful content for searchers.

The ratings do not impact search results directly, but Google’s guidelines for the human raters suggest its ranking priorities: what it looks for and what receives algorithmic scrutiny.


Google updated the guidelines (PDF) last week.

A recurring focus is what Google calls “Your Money or Your Life” topics, which can include ecommerce. Google’s human raters are to review YMYL pages more closely.

YMYL Topics

Your Money or Your Life topics affect a person’s health, safety, financial stability, and well-being.

Some pages clearly fall into that category; others are not as straightforward. Google provides a few examples in the latest guidelines.

  • Information: evacuation routes for a tsunami are clearly YMYL; a weather forecast is possibly YMYL; music award winners are unlikely to be YMYL.
  • Personal opinion: a personal view of why a racial group is inferior is clearly YMYL; a view of why an exercise is inferior is possibly YMYL; a view of why a rock band is inferior is unlikely to be YMYL.
  • Ecommerce and product reviews: purchasing prescription drugs is clearly YMYL; a review of a car is possibly YMYL; purchasing pencils is unlikely to be YMYL.

YMYL and EEAT

Sites that provide health- or money-related advice or sell products that can affect health or wealth must have clear signs of (i) high-level expertise and (ii) first-hand experience with the topic.

The guidelines provide much detail on how the raters should assess EEAT — the Expertise and Experience of the author and the Authoritativeness and Trustworthiness of the site — by reviewing these elements:

  • Contact page (with the address) or customer support page (important for online stores that process payments).
  • About page detailing the business’s history, milestones, awards, and achievements. The Organization schema type can help search engines extract the essential info.
  • Shipping and return policies, terms of service, cookie policy, privacy policy.
  • Detailed author profiles describing expertise and experience.
  • Positive branded search results that reflect the business’s reputation. Google encourages raters to search for the site and author names.
  • Original “opinion” or “expert” content.
  • Detailed methodology for product reviews.
  • Citations from trusted sources (government, official) on content pages.

Google also lists elements that should not be considered for evaluating EEAT:

  • Ads (unless they prevent visitors from reading or engaging with a page).
  • Broken links (unless excessive).

None of those EEAT elements is a confirmed algorithm factor; Google includes them in the guidelines to assist human evaluators. Still, they represent Google’s definition of EEAT and presumably play a ranking role, algorithmic or manual, for all sites, especially YMYL-focused ones.

How YouTube’s Recommendation System Works In 2025 via @sejournal, @MattGSouthern

In a recent video interview, YouTube Liaison René Ritchie spoke with Todd Beaupré, YouTube’s Senior Director of Growth & Discovery, to discuss the platform’s recommendation system functions and what creators can expect this year.

Their discussion revealed how time of day, device type, viewer satisfaction, and the advent of large language models (LLMs) are reshaping YouTube’s algorithms.

Here’s what you need to know about YouTube’s recommendation system and how it works.

Personalized Recommendations

One of the central themes of the interview is YouTube’s focus on matching content to individual viewer preferences.

According to Beaupré:

“Oftentimes creators will say, ‘Hey, the recommendation system is pushing out my video to people,’ or, ‘Why isn’t it pushing out my video?’ … The way it works is, it isn’t so much about pushing it out as much as it’s pulling…”

He goes on to explain that YouTube’s home feed prioritizes content based on what each viewer is most likely to enjoy at any given moment:

“When you open the homepage, YouTube is going to say hey Rene is here, we need to give Rene the best content that is going to make Rene happy today.”

Metrics & Satisfaction

While click-through rate (CTR) and watch time remain important, YouTube’s system also accounts for user satisfaction gleaned through direct surveys and other feedback signals.

Beaupré notes:

“We introduced this concept of satisfaction… we’re trying to understand not just about the viewer’s behavior and what they do, but how do they feel about the time they’re spending.”

He explains YouTube’s goal is to cultivate long-term viewer satisfaction:

“…we look at things like likes, dislikes, these survey responses… we have a variety of different signals to get at this satisfaction… we want to build a relationship with our audience just as creators want to do with their fans.”

Evergreen & Trending Content

YouTube’s algorithms can identify older videos that become relevant again due to trending topics, viral moments, or nostalgic interests.

Beaupré cites the system’s ability to pivot:

“…maybe like right now there’s a video that that reaches a certain audience but then like in six months… that makes this video relevant again… if it’s relevant and maybe to a different audience than enjoyed it the first time.”

Context: Time, Device, & Viewer Habits

Beaupré revealed YouTube’s system may show different kinds of content depending on whether someone is watching in the morning or at night, on a mobile phone or a TV:

“The recommendation system uses time of day and device… as some of the signals that we learn from to understand if there’s different content that is appealing in those different contexts… if you tend to have a preference for watching news in the morning and comedy at night… we’ll try to learn from other viewers like you if they have that pattern.”

Fluctuations In Views

Creators often worry if their views dip, but Beaupré suggests this can be a natural ebb and flow:

“…the first thing is that that is natural… it’s not particularly reasonable to expect that you’re going to always be at your highest level of views from all time… I would encourage you not to worry about it too much…”

He also recommends comparing metrics over longer periods and leveraging tools like Google Trends:

“…we do see seasonality can play a role… encourage you to look beyond… 90 days or more to kind of see the full context.”

Multi-Language Audio

Many creators are exploring multilingual audio to broaden their audiences.

Beaupré highlights how YouTube has adapted to support dubbed tracks:

“…we needed to add some new capabilities… aware that this video actually is available in multiple languages… so if you’re a Creator who’s interested in extending your reach through dubs… make sure that your titles and descriptions… are also uploaded [in] translated titles and descriptions…”

He also emphasizes consistency:

“We’ve seen in particular creators who dub at least 80% of the… watch time… tend to have more success than those who dub less…”

LLM Integration

Looking to the future, large language models (LLMs) enable YouTube to better understand video content and viewer preferences.

Beaupré says:

“…we’ve applied the large language model technology to recommendations at YouTube to… make them more relevant to viewers… rather than just memorizing that this video tends to be good with this type of viewer… it might actually be able to understand the ingredients of the dish better and maybe some more elements of the video style…”

Beaupré likens it to an expert chef who can adapt recipes:

“…we want to be more like the expert chef and less like the… memorized recipe.”

Key Takeaways For Creators

Here are the top takeaways from their 21-minute conversation on the YouTube recommendation system.

  1. The recommendation system is about “pulling” content for each viewer, not pushing videos universally.
  2. Metrics like CTR and watch time matter, but satisfaction (likes, dislikes, surveyed feedback) is also essential.
  3. YouTube can resurface older videos if renewed interest emerges.
  4. Time of day and device usage influence recommendations.
  5. View fluctuations are normal—seasonality, trending events, and external factors can all be at play.
  6. Dubbing and translated titles may help reach new markets, especially if a high percentage of your watch time is dubbed.
  7. Large language models empower more nuanced understanding—creators should stay attuned to how this impacts discovery.


YouTube plans to share more updates at VidCon later this year.


Featured Image: Mamun_Sheikh/Shutterstock

DeepSeek-R1: The Open-Source AI Challenging ChatGPT via @sejournal, @MattGSouthern

DeepSeek-R1 is a new AI reasoning model from the Chinese company DeepSeek.

Released on January 20, it offers a cost-effective alternative to ChatGPT.

Here’s why DeepSeek-R1 is trending across the web right now.

Key Features

Human-Like Thinking

DeepSeek-R1 has advanced reasoning skills that help it solve complex problems in math, logic, and coding.

People praise its ability to mimic human-like thinking. It breaks problems down into smaller steps using a “Chain of Thought” (CoT) method.

As it processes its responses, DeepSeek-R1 can adjust answers in real time and experience “aha” moments while solving tricky problems.

Here’s a screenshot from DeepSeek’s research paper (PDF link) demonstrating where this moment occurred:

Screenshot from: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, January 2025.

Here’s another screenshot more representative of what you’ll likely see when you use the web interface. This is DeepSeek’s thought process when presented with an SEO-related question:

Screenshot from: chat.deepseek.com, January 2025.

Its chain of thought continued for numerous paragraphs before finally generating a response.

Open Source

DeepSeek-R1 is an open-source model released under the MIT license, which means anyone can use and modify its code.

This openness makes DeepSeek-R1 appealing to businesses, startups, and developers seeking affordable AI solutions.

Lower Development Cost

While companies like OpenAI have spent hundreds of millions to develop their models, DeepSeek-R1 was reportedly built with a budget of just $6 million.

DeepSeek achieved this by using data more efficiently and applying reinforcement learning in a novel way that departed from the conventional supervised fine-tuning typically used to train large language models.

This reduced the need for large amounts of computing power, making the model cheaper to train and more affordable for end-users.

Affordable Pricing

DeepSeek-R1’s competitive pricing is another factor contributing to its growing popularity.

It’s completely free to use through chat.deepseek.com. And if your machine has the necessary specs, you can also run the model locally on your computer at no cost.

For those without such resources, DeepSeek offers a cloud-based API service. Accessing the model through the API incurs costs, but the pricing is far below industry standards.
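For developers, DeepSeek documents its API as OpenAI-compatible chat completions, so a request can be sketched with nothing but the standard library. The endpoint and model name below reflect DeepSeek's public documentation at the time of writing, but treat them as assumptions and verify against the current docs; the request here is only constructed, not sent:

```python
import json
import urllib.request

API_KEY = "YOUR_DEEPSEEK_API_KEY"  # placeholder; create a real key in the DeepSeek console

# Assumed endpoint and model name; verify against DeepSeek's current API docs.
url = "https://api.deepseek.com/chat/completions"
payload = {
    "model": "deepseek-reasoner",  # the R1 reasoning model
    "messages": [
        {"role": "user", "content": "Explain chain-of-thought reasoning in one sentence."}
    ],
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# Uncomment to actually send the request (requires a valid key and network access):
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read())["choices"][0]["message"]["content"])
```

Because the request shape mirrors OpenAI's chat format, existing OpenAI client code can often be pointed at DeepSeek by swapping the base URL and key.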

Is It Any Good?

While DeepSeek-R1 is praised for being affordable and open-source, opinions on its performance vary.

Many benchmarks show it performs on par with OpenAI’s o1 model in areas like logical reasoning and problem-solving.

While DeepSeek-R1 may have unseen limitations, it’s a helpful option for tasks requiring systematic, step-by-step reasoning.

Its open-source nature allows for rapid iteration, making it a dynamic and evolving tool.

What People Are Saying

The release of DeepSeek-R1 has sparked widespread discussion about its potential to democratize access to AI.

The model’s launch also carries geopolitical significance.

Analysts view DeepSeek-R1 as a demonstration of China’s advancements in AI, particularly in light of U.S. technology export controls.

By achieving competitive results with a fraction of the resources, DeepSeek highlights the growing global competition in AI.


Looking Ahead

DeepSeek-R1 represents a milestone in the AI race, offering a high-performance, cost-effective alternative to established tools.

While it may not yet outperform its competitors in every aspect, its affordability and accessibility position it as a transformative tool for many applications.

Broader Market Impact

The release of DeepSeek-R1 is impacting global markets, particularly in AI and technology. After its launch, tech stocks experienced sharp declines as investors reevaluated the need for large hardware investments.

Nvidia, for example, lost over $300 billion in market value, the largest single-day loss for any company.

This is a developing story…

Should You Still Use WordPress? via @sejournal, @alexmoss

The last year has been quite eventful within the WordPress ecosystem, marked by discussions and disputes that have prompted some to question the long-term sustainability of the platform and consider exploring alternative solutions.

However, when you dive deeper into these concerns, they pale in comparison to WordPress’s enduring strengths as a product and its unmatched versatility in the CMS space.

Market Share Is Sustained

The first version of WordPress was released on May 27, 2003. Its unique architecture, combined with its open-source nature, drove a surge in popularity: as of December 2024, WordPress powers 43.7% of all websites, dominating the CMS market.

This growth has also held steady despite the emergence of other CMSs, including Shopify, Wix, and Squarespace. But none of these are open source.

Not “Just Another” CMS

WordPress’ unique architecture, which allows plugin and theme extensibility, combined with the power of open source, means it can scale far beyond the blogging platform it originally was.

This means themes can be built for end-users who don’t necessarily understand HTML or CSS, and third-party developers can create plugins that extend and scale the core platform in limitless ways.

Over the years, I’ve built so much with WordPress, including forums, job boards, educational and learning-based portals, ecommerce sites, communities, comparison engines, and scaled themes.

Our clients have ranged from people running small side hustles and launching startups to enterprises managing hundreds of installs on a custom-built server, scaling sites for different global regions, and building custom APIs on top of WordPress.

No other CMS provides this flexibility; any alternative does so only at huge cost and with significant technical debt.

Extensibility [Mostly] Without Limits

Beyond what I’ve personally built with WordPress, the concept of third-party plugins paved the way for trusted extensibility within the CMS.

The emergence of free and premium plugins in the WordPress ecosystem has created its own niche, with over 59,000 free plugins available within the official directory – and this does not include the thousands of plugins available away from the repo.

When I discovered WordPress in 2009, I realized it wasn’t just a blogging platform – and, more importantly, it was the most accommodating of SEO best practices. I published my first of numerous plugins in October 2010.

In 2013, I co-founded an agency with WordPress development as one of its core offerings, and we have since worked on thousands of WordPress sites. Whenever we received an inquiry or pitched for a build, WordPress was always the CMS that provided the best solution.

It’s also easy to get started with WordPress, and I advise anyone who wants to extend their knowledge to do so through experimentation (I talked about this at BrightonSEO a couple of months ago).

A Truly Passionate Community

Another reason I came to love WordPress was discovering how much the community’s support helped the CMS progress.

Like the SEO community, the WordPress community is extremely engaged and supportive: not just in solving issues and developing the CMS itself, but also in supporting people’s professional and personal development.

Beyond the extensive resources WordPress provides through its documentation and forums, the community thrives all year round through WordCamp meetups worldwide and participates in other communities, including WordPress chat and PostStatus. There are also numerous podcast series to follow, including Do the Woo, WP Product Talk, and WP Builds.

This community is extremely supportive and resilient to change, which in turn helps the development of WordPress core and has been instrumental in shaping the future of website creation, production, and improvement.

The community also gives back in ways I haven’t seen in other verticals.

You Own Your Site And Data

You’d think this would be obvious, but it isn’t. Remember, WordPress is open source. Not only does this mean that the core product is completely free and supported by the community, as already mentioned, but it also means your site is your property.

Compare this to Shopify, Wix, and Squarespace: those platforms own your site!

While people may believe there are inherent risks with WordPress, ask yourself what would happen to your site if, for example, Shopify were to close its business tomorrow.

The answer is simple: your site would cease to exist. Then what? If WordPress stopped development forever as of today, you’d still have your website, content, and data, all of it. What do the other CMSs offer?

As another layer of freedom and risk mitigation, the choice of using WordPress as a CMS is also independent of where you choose to host it.

If, for whatever reason, you want to migrate your site from one host to another, you have the freedom to do so, whereas closed-source platforms not only own the website you produce with them but also the server where it resides.

Do The Recent WP ‘Disagreements’ Present A Risk To WordPress?

TL;DR: no. The situation does not affect the functionality of WordPress as a CMS. WordPress remains a stable, reliable, and widely used platform, with no indication that its long-term sustainability is at risk. The CMS itself is unaffected and continues to thrive. If you want to read more about the current discussions and disagreements impacting WP leadership, you can read articles such as this and this to gain more insight.

Other Options?

Still skeptical about WordPress? OK, what are your other options?

While other CMS platforms can be a good alternative for a “standard informational site”, or one with no customization requirements beyond out-of-the-box functionality (a state that rarely lasts once a site starts to scale), you have to make a very informed decision about whether switching is actually worth it, and about what problem you believe you’re solving.

Some questions I’d ask myself:

  • Is the CMS open source? If not, what do you own?
  • Is there a strong community and support network?
  • Does it play nicely with the third-party connections and APIs you intend to use?
  • Can you scale the site in the way you want?
  • Can you truly control output on the front end?
  • Does it adhere to SEO best practices?

If any of the answers above is a “no”, then you need to understand the risks involved before considering any migration, as you may find that the risks of the alternative outweigh any that WordPress poses.

A couple of months ago, I decided to research several open-source CMSs to see if any other platform could compete with WordPress and its capabilities. Unsurprisingly, nothing came close.

TL;DR

WordPress is here to stay and is still the CMS I’d advise in 99% of cases.

It’s safe, supported, robust, future-proof, and open source.

Whilst other CMS platforms can offer solutions for smaller or simpler sites, I have yet to be truly convinced that any of them poses a considerable risk to WordPress’s future or its role in the future of websites.



Featured Image: Krakenimages.com/Shutterstock

Smart Bidding In Google Ads: In-Depth Guide via @sejournal, @brookeosmundson

Imagine running campaigns that adjust bids perfectly for every auction, targeting the right user at the right moment.

That’s the promise of Smart Bidding in Google Ads.

For PPC marketers, especially beginners, Smart Bidding can feel like an enticing but sometimes overwhelming tool.

Between algorithm updates, new automation options, and ever-changing PPC best practices, it’s easy to lose sight of how to maximize its potential.

In this guide, we’ll explore what Smart Bidding is, how it works today, and the actionable strategies you can use to get the best results. Whether you’re new to automation or looking to fine-tune your approach, this article is here to help.

What Is Smart Bidding?

Per Google’s definition:

“Smart Bidding refers to bid strategies that use Google AI to optimize for conversions or conversion value in each and every auction.”

Unlike manual or rules-based bidding, Smart Bidding uses data signals – like device type, time of day, location, and even user intent – to determine the optimal bid for each auction.

Some of the key Smart Bidding strategies include:

  • Target Cost Per Acquisition (CPA): Sets bids to help you get as many conversions as possible at your target cost per acquisition.
  • Target Return on Ad Spend (ROAS): Focuses on maximizing conversion value at your desired return.
  • Maximize Conversions: Aims to get the highest number of conversions within your budget.
  • Maximize Conversion Value: Optimizes for the highest total conversion value, perfect for campaigns with varied transaction amounts.
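The arithmetic behind these targets is simple, and sketching it makes the strategy names concrete. A minimal illustration (the spend and conversion figures are made up):

```python
def cpa(cost: float, conversions: int) -> float:
    """Cost per acquisition: total spend divided by conversions."""
    return cost / conversions

def roas(conversion_value: float, cost: float) -> float:
    """Return on ad spend: revenue generated per dollar spent."""
    return conversion_value / cost

# Hypothetical month: $500 spent, 10 conversions worth $2,000 in total.
spend, conversions, revenue = 500.0, 10, 2000.0

print(f"CPA:  ${cpa(spend, conversions):.2f}")     # $50.00 per conversion
print(f"ROAS: {roas(revenue, spend) * 100:.0f}%")  # 400%, i.e. a 4x return
```

A Target CPA strategy bids so that the first number lands near your target; a Target ROAS strategy optimizes the second.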

These strategies are invaluable for streamlining campaign management, saving time, and improving results.

However, they work best when paired with a clear strategy and enough data points to make sound decisions.

When Should You Use Smart Bidding?

Smart Bidding isn’t a one-size-fits-all solution. Choosing the right strategy depends on your campaign goals, audience, and available data.

Here’s when each strategy shines, along with real-world examples to help you decide:

Target CPA

Target CPA is perfect for campaigns where controlling the cost per lead or conversion is crucial, such as lead generation.

For example, a SaaS company running a campaign to drive free trial signups wants to maintain a $50 CPA.

By setting this target, Smart Bidding adjusts bids to focus on leads that are more likely to convert within that range, while ignoring auctions where conversion costs might exceed that goal.

Target ROAS

This Smart Bidding strategy is ideal for campaigns where profitability matters more than the number of conversions. Typically, most ecommerce businesses would opt for a ROAS strategy.

For example, say an online retailer selling high-end electronics has a goal to maintain a 400% ROAS (four times return on every dollar spent).

Using Target ROAS, the algorithm prioritizes auctions for users likely to generate higher-value purchases, such as customers buying laptops, while de-emphasizing bids for lower-margin items like accessories.

Maximize Conversions

Try using this Smart Bidding strategy when you have a set budget and want to maximize the total number of conversions, regardless of cost per conversion.

It’s especially effective for brand awareness or expanding into new markets.

For example, say, a non-profit organization aims to maximize email signups for a new awareness campaign.

Since the focus is on volume rather than cost efficiency, Maximize Conversions helps them get the most signups possible within their budget.

Maximize Conversion Value

This strategy is best for campaigns with varied transaction values, where the goal is to optimize for total revenue or high-value actions.

For example, a luxury travel agency advertises vacation packages ranging from $5,000 to $20,000.

By using Maximize Conversion Value, the campaign prioritizes auctions for customers likely to book premium packages, even if they cost more to acquire, rather than focusing on smaller bookings.

Common Pitfalls Of Smart Bidding

Smart Bidding is a powerful tool, but it’s not immune to challenges. Understanding potential pitfalls can help you avoid costly mistakes.

1. Insufficient Or Incorrect Data

Smart Bidding relies heavily on historical data to optimize bids. Campaigns with low conversion volume or incomplete tracking often confuse the algorithm, leading to poor performance.

For example, if a campaign got only 10 conversions in the past 30 days, it may be best to hold off on Target ROAS or Target CPA strategies until it gathers more data.

With only a handful of conversions every month, the algorithm lacks enough data to predict future outcomes, resulting in missed opportunities or over-aggressive bidding.

For new campaigns, consider using Maximize Clicks first to gather enough traffic to your website, allowing the algorithm to learn faster and gain more historical data.

2. Misaligned Goals

Using the wrong bidding strategy for your campaign objectives is the easiest way to derail your campaign.

For instance, Target CPA may not be suitable if profitability (ROAS) is your primary goal.

In this hypothetical example, say a retailer mistakenly applies Target CPA to a holiday campaign, aiming for a $20 CPA, even though their products have a $200 average transaction value.

That strategy drives volume, but at the expense of profitability.

Make sure to clearly define your campaign’s primary objective (lead generation, revenue maximization, etc.) and choose a Smart Bidding strategy that aligns with it.

3. Overlooking The Learning Phase

Every Smart Bidding strategy has a learning phase where performance may fluctuate as the algorithm adjusts.

Making changes too soon can reset the process and waste budget.

Say you just launched a campaign with a Target CPA strategy, only to switch it to Maximize Conversions just one week later due to inconsistent results.

This prevents the algorithm from stabilizing and optimizing for long-term success.

Allow one to two weeks (or longer for low-volume campaigns) for the learning phase to complete. Monitor performance, but avoid major changes during this period.

4. Ignoring External Factors

While Smart Bidding is highly adaptive, it can’t predict seasonal trends, promotions, or external market shifts without proper input.

Make sure to use Google’s seasonality adjustment tool to account for temporary shifts in user behavior during sales or promotions, or even national events that could change a user’s online behavior.

5. Underutilizing Advanced Features

Many advertisers set up Smart Bidding, but fail to use advanced options like bid simulators, audience layering, or custom conversion values.

This limits their ability to optimize performance.

Try testing out some of these additional campaign or ad group layers to understand the potential outcomes, and use audience insights to refine targeting.

Best Practices For Smart Bidding Success

Smart Bidding can be a game-changer for your campaign results, but it’s not a magic wand.

To get the most out of this powerful tool, you need to pair automation with thoughtful planning and regular oversight.

By following these tried-and-true best practices, you’ll not only improve campaign performance but also avoid the common pitfalls that trip up many advertisers.

1. Feed The Algorithm With Clean, Accurate Data

Conversion tracking is the backbone of Smart Bidding. Errors in tracking or unverified conversions can lead to misguided optimizations.

When fed with clean and accurate data, the algorithm has the best chance to produce fruitful results.

But when fed with inaccurate data points, your Smart Bidding strategy will wreak havoc on your performance.

Garbage in, garbage out.

Be sure to regularly audit your conversion tracking setup. Ensure every key action (purchases, form submissions, calls, etc.) is tracked accurately and attributed correctly.

For ecommerce campaigns, make sure to include transaction values to correctly use Maximize Conversion Value or Target ROAS strategies.

2. Set Realistic Goals

Unrealistic CPA or ROAS targets can choke the algorithm, resulting in limited impressions or poor bid adjustments.

If you’re not sure what to set your campaign targets at, review historical campaign datasets to set achievable targets.

For example, if your average CPA is $50, don’t set a Target CPA of $20 right away. Start closer to your historical average and adjust gradually.

This also applies to your daily budget. If your daily budget is only $50 and your Target CPA is also $50, ad serving will be severely limited, because the algorithm has almost no room to find the users most likely to convert.
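That budget-versus-target relationship can be sanity-checked before launch. This is a rough heuristic, not a Google Ads rule: if the daily budget can't cover more than a handful of conversions at your Target CPA, the algorithm has little room to work. A sketch with an arbitrary illustrative threshold:

```python
def expected_daily_conversions(daily_budget: float, target_cpa: float) -> float:
    """How many conversions the budget can theoretically fund per day."""
    return daily_budget / target_cpa

def budget_check(daily_budget: float, target_cpa: float, min_headroom: float = 3.0) -> str:
    """Flag budgets that leave the bidder too little room.

    min_headroom is an arbitrary illustrative threshold, not a Google figure.
    """
    headroom = expected_daily_conversions(daily_budget, target_cpa)
    if headroom < 1:
        return "Budget below one conversion/day at target CPA: ad serving will be severely limited."
    if headroom < min_headroom:
        return "Tight: consider raising the budget or relaxing the CPA target."
    return "OK: budget leaves room for the algorithm to optimize."

print(budget_check(50.0, 50.0))   # 1 conversion/day of headroom -> tight
print(budget_check(300.0, 50.0))  # 6 conversions/day of headroom -> OK
```

Running this kind of check when setting targets helps catch the budget-starved configurations described above before they throttle delivery.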

3. Layer Audiences And Signals

While Smart Bidding works on its own, adding audience segments or demographic layers can give the algorithm more context.

Try using remarketing lists, in-market audiences, and customer match data to guide Smart Bidding towards higher-value users.

You can add audience segments as “Observation Only” to start if you don’t want to narrow in on those users just yet.

Depending on their performance, you can always adjust your bids up or down, or even exclude them altogether.

4. Leverage Seasonality Adjustments

Google’s seasonality adjustment feature lets you signal to the algorithm about anticipated spikes or dips in demand.

Before a major sale or holiday, input a seasonality adjustment to help the algorithm prepare for the surge in conversions.

Additionally, make sure to increase your daily budgets to account for those holiday surges.

5. Monitor Performance With The Right Metrics

Don’t rely solely on Google Ads’ automated suggestions and insights.

Do your due diligence and analyze auction insights, search impression share, and audience performance to identify trends and areas for improvement.

6. Run Experiments To Validate Strategies

Testing is critical to understanding what works.

Google Ads Experiments allows you to split test Smart Bidding strategies without risking your entire budget.

For example, say you’ve been running a campaign on Maximize Conversions, but are looking to narrow in on a specific CPA target.

You can set up an experiment to test a Target CPA strategy against the Maximize Conversions to see what performs better for your goals.

That way, you’re not dramatically shifting the behavior of the account overnight and introducing a lot of volatility into performance.

The Bottom Line On Smart Bidding

Smart Bidding in Google Ads has evolved to become an indispensable tool for PPC marketers.

Its ability to leverage machine learning and real-time data is unmatched, but like any tool, its success depends on how you use it.

By aligning your strategy with your goals, feeding the algorithm accurate data, and monitoring performance regularly, you can unlock its full potential.

Remember, automation doesn’t mean you’re off the hook – it means you have more time to focus on strategy, creativity, and scaling your campaigns.

With the right approach, Smart Bidding isn’t just smart – it’s transformational.



Featured Image: dee karen/Shutterstock