Shopify CEO’s Memo Marks A Pivotal Moment For AI In The Workplace via @sejournal, @martinibuster

A memo by Shopify CEO Tobi Lütke sets a company-wide expectation for the use of AI, not just internally but also by encouraging employees to think about how their end users can benefit from AI. It is worth reading because it marks a pivotal moment in how AI can be used to multiply what people can accomplish, potentially a hundredfold, and in how AI can be envisioned for end users as well.

The internal memo details a company-wide reflexive AI usage strategy, which means using AI as a matter of course. It sets the stage for reshaping how merchants use Shopify and points toward a future where entrepreneurship on Shopify is AI-native by design. The memo signals how AI is swiftly becoming central to how all businesses will operate, especially yours.

Reflexive Use Of AI

The heart of the memo is the CEO’s encouragement to discover how AI can be applied to every aspect of how work gets done internally. He cites his own usage of AI and how he feels he’s only scratching the surface of how it can be integrated into his workflow. He asks all employees to “tinker” with AI and encourages company-wide adoption so that the use of AI becomes reflexive.

His use of the word reflexive is important because it means doing something without consciously thinking about it. The express meaning, then, is that he wants AI everywhere, because AI has the ability to boost productivity not just ten times but a hundredfold.

Lütke advocates for the transformational qualities of AI as a productivity multiplier, citing its reflexive use as the key to unlocking exponential gains in what can be accomplished at Shopify.

He wrote:

“We are all lucky to work with some amazing colleagues, the kind who contribute 10X of what was previously thought possible. It’s my favorite thing about this company. And what’s even more amazing is that, for the first time, we see the tools become 10X themselves.

I’ve seen many of these people approach implausible tasks, ones we wouldn’t even have chosen to tackle before, with reflexive and brilliant usage of AI to get 100X the work done.”

Workplace Expectations and Requirements

What’s important about the Lütke memo is that it sets expectations for the use of AI in the workplace in a way that other workplaces may want to follow.

Using AI effectively is now a fundamental expectation of all Shopify employees, and it will be factored into peer and performance review questionnaires. Employees must demonstrate why AI cannot be used to accomplish a goal before asking for more resources. The expectation for AI usage doesn’t apply only to software engineers; it applies to all employees, all the way up to the executive management level.

AI At Every Workflow Step

The memo sets the expectation that AI must be involved during the GSD (Get Sh*t Done) prototype phase and at a “fraction of the time it used to take.” Teams are also encouraged to envision their projects as if AI were also a part of the team.

He writes:

“What would this area look like if autonomous AI agents were already part of the team? This question can lead to really fun discussions and projects.”

And elsewhere:

“In my On Leadership memo years ago, I described Shopify as a red queen race based on the Alice in Wonderland story—you have to keep running just to stay still. In a company growing 20-40% year over year, you must improve by at least that every year just to re-qualify. This goes for me as well as everyone else.

This sounds daunting, but given the nature of the tools, this doesn’t even sound terribly ambitious to me anymore. It’s also exactly the kind of environment that our top performers tell us they want. Learning together, surrounded by people who also are on their own journey of personal growth and working on worthwhile, meaningful, and hard problems is precisely the environment Shopify was created to provide. This represents both an opportunity and a requirement, deeply connected to our core values of Be a Constant Learner and Thrive on Change. These aren’t just aspirational phrases—they’re fundamental expectations that come with being a part of this world-class team. This is what we founders wanted, and this is what we built.”

Learning, Collaboration, and Community

The other exciting part of Lütke’s memo is that he encourages employees to share their discoveries, breakthroughs, and wins with each other so that all employees can benefit from new and creative ways of getting things done with AI.

“We’ll learn and adapt together as a team. We’ll be sharing Ws (and Ls!) with each other as we experiment with new AI capabilities, and we’ll dedicate time to AI integration in our monthly business reviews and product development cycles. Slack and Vault have lots of places where people share prompts that they developed, like #revenue-ai-use-cases and #ai-centaurs.”

Takeaways

Lütke’s memo shows how AI is radically changing the workplace at Shopify and how it can spread across every workforce, including your own.

Shopify is envisioning the next stage of ecommerce entrepreneurship, AI-everything, where AI is a ubiquitous presence for merchants. This is an example of the kind of leadership all entrepreneurs and small businesses should show: start thinking about how to integrate AI for themselves and their customers instead of lowering the window blinds to spy across the street at what competitors are doing.

Featured Image by Shutterstock/TarikVision

Google Chrome Adds New Tools For Better Mobile Testing via @sejournal, @MattGSouthern

Chrome has added new DevTools features that help developers test website performance based on real-world data.

Available in Chrome 134, these tools include CPU throttling calibration and other improvements that help bridge the gap between development environments and actual experiences.

How This Helps

Developers build websites on powerful desktop computers. However, many users visit these sites on much slower mobile devices.

This creates a problem: performance issues may not show up during testing.

Chrome DevTools has offered CPU throttling for years, letting developers simulate slower devices. But choosing the right throttling level has been mostly guesswork.

This update is designed to eliminate the guesswork.

CPU Throttling Calibration

The main new feature in Chrome 134 is CPU throttling calibration. It creates testing presets specifically for your development machine.

After a quick test, DevTools creates two options:

  • “Low-tier mobile” – Mimics very basic devices
  • “Mid-tier mobile” – Matches average mobile device speed
Screenshot from: developer.chrome.com/blog/devtools-grounded-real-world, April 2025.

Brendan Kenny states in the Chrome Developers Blog:

“We generally recommend the ‘mid-tier’ preset for most testing. If many of your users have even slower devices, the ‘low-tier’ option can help catch issues affecting those users.”

Setting up calibration is easy:

  • Open the Performance panel’s Environment settings
  • Select “Calibrate…” from the CPU throttling dropdown
  • Let DevTools run a quick test
  • Start using your new calibrated presets
Screenshot from: developer.chrome.com/blog/devtools-grounded-real-world, April 2025.
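
The calibrated presets apply inside DevTools, but the same slowdown idea can be carried into scripted performance checks. The following is a minimal sketch, assuming a Node project with Puppeteer installed; the 4x multiplier and the example URL are placeholders for your own calibrated value and site.

```typescript
// Minimal sketch: load a page under a CPU slowdown factor, similar in spirit
// to the DevTools "mid-tier mobile" preset. The factor of 4 is a placeholder
// for whatever value your own calibration run reports.
import puppeteer from 'puppeteer';

async function measureWithThrottling(url: string, cpuSlowdownFactor: number): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Apply the CPU throttling multiplier before navigating.
  await page.emulateCPUThrottling(cpuSlowdownFactor);

  const start = Date.now();
  await page.goto(url, { waitUntil: 'load' });
  const loadTimeMs = Date.now() - start;

  await browser.close();
  return loadTimeMs;
}

measureWithThrottling('https://example.com', 4).then((ms) => {
  console.log(`Load time with throttling applied: ${ms}ms`);
});
```

A script like this only approximates what the DevTools presets do, but it lets automated tests run under roughly the same slowdown you chose for manual profiling.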

What Throttling Can & Can’t Do

The new calibration makes testing more accurate, but it has limits.

Throttling works by pausing the browser tab to make tasks take longer. This method is useful for simulating JavaScript and layout calculations.

Tests show that calibrated throttling closely matches how these processes run on real mobile devices.

However, CPU throttling doesn’t accurately simulate:

  • Graphics-heavy operations
  • Slower storage speeds
  • Limited memory
  • Device heating issues

Chrome’s testing showed that visually complex pages could take twice as long on real mobile devices compared to simulated tests.

This means you should still test on real devices, especially for visually rich websites.

Real-World Data Integration

Besides CPU calibration, Chrome 134 adds several features that use real-world performance data:

  • Throttling suggestions based on your actual site visitors
  • Alerts when your test results don’t match real-user experiences
  • Performance insights that flag mismatches between tests and reality
  • Smarter organization of performance tips based on your users’ actual needs
  • Better tracking of what settings were used for each test

These features help ensure your testing matches what users experience rather than artificial lab conditions.
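
The real-user numbers behind these features generally come from field data such as the Chrome UX Report (CrUX). As a rough illustration of what that field data looks like, the sketch below queries the public CrUX API for a URL’s phone metrics; the API key, example URL, and the focus on LCP are assumptions made for the example and are not part of the DevTools features themselves.

```typescript
// Rough sketch: fetch real-user (field) data for a URL from the Chrome UX
// Report API. CRUX_API_KEY and the URL are placeholders you would supply.
const CRUX_API_KEY = process.env.CRUX_API_KEY;

async function fetchFieldData(url: string): Promise<void> {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor: 'PHONE' }),
    }
  );
  if (!response.ok) {
    throw new Error(`CrUX request failed with status ${response.status}`);
  }
  const data = await response.json();

  // The 75th percentile is the value Core Web Vitals assessments use.
  const lcpP75 = data.record?.metrics?.largest_contentful_paint?.percentiles?.p75;
  console.log(`p75 LCP on phones: ${lcpP75}ms`);
}

fetchFieldData('https://example.com');
```

Comparing lab runs against numbers like these is exactly the kind of mismatch the new DevTools alerts are meant to surface.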

Why It Matters For SEO & Marketing

These new tools address the disconnect between how websites perform in development and how they perform on real devices.

Chrome 134 helps ensure your performance improvements benefit users by providing more realistic testing conditions.

As mobile continues to dominate web traffic, these tools provide a better foundation for improving user experience, conversion rates, and search rankings.

Google Maps Gets An Upgrade To Combat Fake Reviews via @sejournal, @MattGSouthern

Google has updated its AI systems for Maps and Business Profiles, which now use Gemini to identify risky profile edits and fake reviews.

Gemini Finds Suspicious Profile Edits

Google is employing Gemini to spot fake changes to Business Profiles.

It can distinguish between a regular update, like a slight name change, and a sudden, suspicious shift, such as changing a business category from a “cafe” to a “plumber.”

In its announcement, Google said:

“We trained a new model with the help of Gemini that identifies potentially suspicious profile edits. A business that changes its name from ‘Zoe’s Coffee House’ to ‘Zoe’s Cafe’ isn’t suspicious—but a business that suddenly changes its category from ‘cafe’ to ‘plumber’ is.”

Google says this new system has blocked thousands of risky edits this year.

New Tools to Stop Fake Five-Star Reviews

Google will use Gemini to spot fake five-star reviews by tracking reviews over time. This allows the system to find new signs of abuse, even after the review is posted.

The company has launched alerts in the US, UK, and India. These alerts warn users when suspicious five-star reviews have been removed. Google plans to roll out the alerts worldwide next month.

2024 in Numbers: Content Moderation

Google shared strong numbers from its work in 2024:

  • Over 240 million policy-violating reviews were blocked or removed before many people saw them.
  • More than 70 million risky edits to Maps listings were stopped.
  • Over 12 million fake Business Profiles were removed or blocked.
  • Posting was restricted on over 900,000 accounts that broke the rules repeatedly.

What This Means for SEO and Local Marketers

For SEO specialists and local marketing professionals, these updates underline the need for honest review strategies and careful Business Profile management.

As Google’s AI improves, tricks like fake reviews and unauthorized profile changes are easier to catch. Companies using shady tactics will face steeper penalties, while those focusing on genuine customer engagement will gain more trust from Google.

Best Practices for Local SEO

Given these advancements, local SEO professionals should:

  1.  Ensure client review practices follow Google’s rules.
  2. Ensure all Business Profiles are correctly claimed and managed.
  3. Monitor profile changes and review patterns.
  4. Focus on getting honest customer feedback.
  5. Use Google’s tools to report any suspicious activities by competitors.

Looking Ahead

Google plans to keep improving its systems. The company stated it will “keep working on the front lines and behind the scenes to keep content on Google Maps helpful and reliable.”

More details are available in its Content Trust and Safety Report.

Kinsta WordPress Updater Prevents Failed Plugin Updates via @sejournal, @martinibuster

WordPress hosting provider Kinsta announced an automated plugin updater that detects bad updates and recovers from them by rolling back the plugin to its previous state, preventing the downtime that would otherwise affect website performance. Failed plugin updates are prevented from going live, and publishers are immediately notified.

Kinsta shared that a scan of users indicated that the average WordPress installation has 21 active WordPress plugins, suggesting that the average WordPress site is becoming increasingly complex.

That level of plugin usage means that updating and troubleshooting can take up a greater amount of time. Plugins don’t always function well with each other, which can lead to update issues. Kinsta’s new Automatic Updates addresses this by completely automating plugin updates, which helps ensure that all plugins stay up to date.

Keeping WordPress Plugins Updated Is A Security Issue

Outdated plugins can quickly escalate into a nightmare scenario due to vulnerabilities, which in turn can have a profound negative effect on search performance. An effective plan for updating plugins is essential for every WordPress-powered website.

According to Kinsta:

“Nothing confirms the need for automatic updates like finding plugins and themes that are not just out of date but also dangerously vulnerable to security breaches”

Advanced Configuration Options

The new plugin updater enables users to choose update days and time windows, as well as custom URLs for testing. False positives can be reduced by hiding dynamic elements. Sensitivity settings allow users to set how strictly visual differences are flagged, further decreasing false positives.

All plugin updates are logged and can be reviewed by users, including before and after screenshots. Users can receive email notifications for both successful and unsuccessful updates.
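
For context on what the service automates, here is a rough, hypothetical sketch of the same update-then-verify-then-roll-back idea using WP-CLI from a Node script. It is not Kinsta’s implementation: the plugin slug, site URL, and the bare-bones HTTP status check are illustrative assumptions, and Kinsta’s service relies on visual comparisons, scheduling, and notifications rather than a simple status check.

```typescript
// Hypothetical sketch of a rollback-safe plugin update using WP-CLI.
// Assumes WP-CLI and curl are available on the server running this script.
import { execSync } from 'node:child_process';

function wp(args: string): string {
  return execSync(`wp ${args}`, { encoding: 'utf8' }).trim();
}

function updateWithRollback(plugin: string, siteUrl: string): void {
  // Record the currently installed version before touching anything.
  const previousVersion = wp(`plugin get ${plugin} --field=version`);

  wp(`plugin update ${plugin}`);

  // Bare-bones health check: the homepage should still return HTTP 200.
  const status = execSync(
    `curl -s -o /dev/null -w "%{http_code}" ${siteUrl}`,
    { encoding: 'utf8' }
  ).trim();

  if (status !== '200') {
    // Roll back by force-installing the previously recorded version.
    wp(`plugin install ${plugin} --version=${previousVersion} --force`);
    console.error(`${plugin}: update rolled back to ${previousVersion}`);
  } else {
    console.log(`${plugin}: updated successfully`);
  }
}

updateWithRollback('example-plugin', 'https://example.com');
```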

The new service costs $3/month for each environment where it is active, with no limit on the number of managed plugins and themes that are monitored.

Read more at Kinsta:

Kinsta Automatic Updates: Hands-free WordPress plugin and theme management

Featured Image by Shutterstock/Krakenimages.com

Studies Reveal Consumers Easily Detect AI-Generated Content via @sejournal, @MattGSouthern

Two new studies reveal that most consumers can spot AI-generated content, both images and text, more readily than marketers may have expected.

The results suggest that brands should be careful when using AI in their marketing materials.

Consumers Identify AI-Generated Images

A study by digital marketing consultant Joe Youngblood found that U.S. consumers correctly spotted AI images 71.63% of the time when shown real photos side-by-side with AI versions.

The study surveyed over 4,000 Americans of different ages.

Youngblood states:

“When asking them to determine which photo was real and which one was AI, over 70% of consumers on average could correctly select the AI generated image,”

Detection rates varied by type of image:

  • Celebrity images (Scarlett Johansson as Black Widow): 88.78% identified correctly
  • Natural landscapes (Italian countryside): 88.46% identified correctly
  • Animal photos (baby peacock): 87.97% identified correctly
  • Space images (Jupiter): 83.58% identified correctly

However, some images were more challenging to detect. Only 18.05% correctly spotted an AI version of the Eiffel Tower, and 50.89% identified an AI-created painting of George Washington.

Similar Skepticism Toward AI-Written Content

A separate report by Hookline& surveyed 1,000 Americans about AI-written content.

Key findings include:

  • 82.1% of respondents can spot AI-written content at least some of the time.
  • Among those aged 22–34, the rate rises to 88.4%.
  • Only 11.6% of young people said they never notice AI content.

Christopher Walsh Sinka, CEO of Hookline&, stated:

“Writers and brands aren’t sneaking AI-generated content past readers.”

Reputational Risks for Brands and Writers

Both studies point to the risks of using AI in content.

From the image study, Youngblood warned,

“If consumers determine that AI images are poor quality or a bad fit they may hold that against your brand/product/services.”

The content study showed:

  • 50.1% of respondents would think less of writers who use AI.
  • 40.4% would view brands more negatively if they used AI-generated content.
  • Only 10.1% would view the brands more favorably.

Older consumers (ages 45–65) were the most critical. Nearly 30% said they did not like AI-written content.

Acceptable Use Cases for AI

Despite the caution, both studies indicate that some uses of AI are acceptable to consumers.

The content report found that many respondents approved of using AI for:

  • Brainstorming ideas (53.7%)
  • Conducting research (55.8%)
  • Editing content (50.8%)
  • Data analysis (50.1%)

In the image study, Youngblood noted that consumers might accept AI for fun and informal uses such as memes, video game sprites, cartoons, and diagrams.

However, for important decisions, they prefer real images.

What This Means

These studies offer guidance for those considering incorporating AI-generated content in marketing material:

  1. Be Transparent: Since many consumers can spot AI-generated content, honesty about its use may help maintain trust.
  2. Focus on Quality: Both studies suggest that genuine, professionally produced content is seen as more reliable.
  3. Use AI Wisely: Save AI for tasks like research and editing, but let people handle creative decisions.
  4. Know Your Audience: Younger consumers may be more accepting of AI than older groups. Tailor your strategy accordingly.

Future marketing campaigns should consider how well consumers can detect AI content and adjust their strategies to maintain trust and credibility.

Google DeepMind’s AGI Plan: What Marketers Need to Know via @sejournal, @MattGSouthern

Google DeepMind has shared its plan to make artificial general intelligence (AGI) safer.

The report, titled “An Approach to Technical AGI Safety and Security,” explains how to prevent harmful uses of AI while amplifying its benefits.

Though highly technical, its ideas could soon affect the AI tools that power search, content creation, and other marketing technologies.

Google’s AGI Timeline

DeepMind believes AGI could arrive by 2030 and expects such systems to work at levels that surpass human performance.

The research explains that improvements will happen gradually rather than in dramatic leaps. For marketers, new AI tools will steadily become more powerful, giving businesses time to adjust their strategies.

The report reads:

“We are highly uncertain about the timelines until powerful AI systems are developed, but crucially, we find it plausible that they will be developed by 2030.”

Two Key Focus Areas: Preventing Misuse and Misalignment

The report focuses on two main goals:

  • Stopping Misuse: Google wants to block bad actors from using powerful AI. Systems will be designed to detect and stop harmful activities.
  • Stopping Misalignment: Google also aims to ensure that AI systems follow people’s wishes instead of acting independently.

These measures mean that future AI tools in marketing will likely include built-in safety checks while still working as intended.

How This May Affect Marketing Technology

Model-Level Controls

DeepMind plans to limit certain AI features to prevent misuse.

Techniques like capability suppression are meant to ensure that an AI system withholds dangerous functions.

The report also discusses harmlessness post-training, which means the system is trained to ignore requests it sees as harmful.

These steps imply that AI-powered content tools and automation systems will have strong ethical filters. For example, a content generator might refuse to produce misleading or dangerous material, even if pushed by external prompts.

System-Level Protections

Access to the most advanced AI functions may be tightly controlled. Google could restrict certain features to trusted users and use monitoring to block unsafe actions.

The report states:

“Models with dangerous capabilities can be restricted to vetted user groups and use cases, reducing the surface area of dangerous capabilities that an actor can attempt to inappropriately access.”

This means that enterprise tools might offer broader features for trusted partners, while consumer-facing tools will come with extra safety layers.

Potential Impact On Specific Marketing Areas

Search & SEO

Google’s improved safety measures could change how search engines work. New search algorithms might better understand user intent and trust quality content that aligns with core human values.

Content Creation Tools

Advanced AI content generators will offer smarter output with built-in safety rules. Marketers may need to adjust their instructions so that the AI produces accurate and safe content.

Advertising & Personalization

As AI gets more capable, the next generation of ad tech could offer improved targeting and personalization. However, strict safety checks may limit how much the system can push persuasion techniques.

Looking Ahead

Google DeepMind’s roadmap shows a commitment to advancing AI while making it safe.

For digital marketers, this means the future will bring powerful AI tools with built-in safety measures.

By understanding these safety plans, you can better plan for a future where AI works quickly, safely, and in tune with business values.


Featured Image: Shutterstock/Iljanaresvara Studio

Google Explains SEO Impact Of Adding New Topics via @sejournal, @martinibuster

Google’s Danny Sullivan discussed what happens when a website begins publishing content on a topic that’s different from the one in which it had gained a sitewide reputation. His comments were made at Search Central Live NYC, as part of a wide-ranging discussion about site reputation.

Danny said that introducing a new topic to a website won’t result in the site taking a hit in rankings. But what could happen is that Google might try to figure out how that content fits into the rest of the site.

Here’s what Danny said:

“We have long done work and are going to continue doing that to understand if parts of the site seem to be independent or starkly different than other parts of the site. It is not bad to have a website do whatever you want the website to do for your readers. It’s not bad that you started off covering one thing and you start writing about something else.

I had one person at an event who was very, very concerned. They started writing about snowboards but now wanted to start writing about skis and was terrified.

That if they write about skiing that somehow the topic of the website and the focus will somehow… it doesn’t work that way.

We’re not kind of building it up on the expertise you have in this particular thing, that type of thing, but what we are trying to understand is if the site seems to be different in some way from other parts of the site.”

It Doesn’t Work That Way

What Danny is saying is that Google looks at how different one part of a site is from another. If a part is vastly different, he went on to say, it may rank well for a time based on the entire site’s reputation for its main topic, but then the new section may lose rankings.

Danny explained that the loss in rankings is not a penalty but rather it’s just a recognition that a section of a site is so vastly different that the reputation of the entire site doesn’t really apply for that particular topic.

Danny used the metaphor of a “mini-site” to explain how Google might split off the reputation of a new section of a site from the rest of the site so that it can earn reputation for its topic.

It makes sense that Google would differentiate the parts of a site, because doing so allows it to understand that one collection of pages is about one topic and another collection of pages within the website is about a different topic.

Featured Image by Shutterstock/Rene Jansa

Google On Negative Authorship Signal And Mini-Site Reputation via @sejournal, @martinibuster

At the recent Search Central Live NYC event, Danny Sullivan discussed what happens when a site begins publishing vastly different content and how that may affect rankings, introducing the concept of a mini-site as a metaphor for how a site’s reputation can be divided. He also discussed the concept of negative authorship authority, which some SEOs believe follows authors from penalized websites and can negatively affect the other sites they publish on.

Negative Authorship Reputation

Danny initially discussed a negative authorship signal that some in the SEO community believe can follow an author from site to site. The idea is that an author whose content is banned on one site will also have their content banned on another site. He denied that Google tracks author authority signals from site to site.

Sullivan explained:

“If you wrote for a site that got a manual action, it doesn’t somehow infect the other site that you might work for later on, so again, this is not something that freelancers should be worried about.

If you’re a publication and for whatever reason you feel like employing a freelancer, and it makes sense, that’s fine. You don’t need to worry about who they worked for before.

And if you are a freelancer you do not need to go back to the publications and say, can you take my byline down because now I can’t get hired from anybody else because they think I’m going to infect them. It is not like that. It’s not a disease. “

The above SEO myth likely began when publishers noticed that content created by a certain author was banned across multiple sites. In that case, it’s reasonable to assume that there was something wrong with the content, but that’s not necessarily true. It could have been that the websites themselves were promoted with unnatural links, or that they were engaged in selling links.

The takeaway from what Danny Sullivan shared is that a manual action on one site doesn’t follow an author to another site. Another takeaway is that there is no negative authorship signal that Google is tracking.

And if there’s no negative authorship signal, could it be that there is no positive author signal either? In my opinion, that’s a reasonable assumption. A signal like that would be too easy to manipulate. Whatever signals Google uses to understand site reputation are likely enough for the purpose of citing an information source in the search results.

Although some SEOs have made claims about authorship signals, such signals have never been known to be a part of Google’s algorithms. Google has a long history of denying the use of authorship signals, and Danny’s statements offer further confirmation that Google does not use authorship as a ranking signal.

Ranking Drops And Mini-Site Reputation

Danny next discussed how a new section of a site could suddenly lose rankings. He says this isn’t necessarily a bad thing; it’s just Google trying to figure out the new section, and if it’s sufficiently different, Google could even start treating it as a standalone mini-site.

Danny used the example of the addition of a forum to a website.

Danny explained:

“For example, you might have a site where you start running a forum. Forums can be different and we would want to understand that this looks like a forum so that we can then rank the forum content against other kinds of forum content on kind of a level playing field or understand that that forum content should be included in things where we try to show forum content.

What can happen is… that it could be that part of your site was doing better because it was seen as part of the overall site. Now we kind of see it as more of independent and part of a full site on its own.

And potentially you could see a traffic drop that comes from that. That doesn’t mean that you suddenly got a site reputation abuse ban issue because first of all that might not have involved third party content abusing first party work, right? Those were the things. So if it doesn’t have any of that it doesn’t have anything to do with that. Secondly, we would have sent you an email. So, it’s not bad.

Because it just could be we’ve had a general re-ranking… It could also mean that in the long run that part of your site might actually do better, because we might recognize it in different ways, that we might be able to surface it in different ways. And it might start sort of earning its own like ‘mini-site’ reputation along the way.”

Three things to take away from that last part.

First, a ranking drop could be due to benign things; don’t always assume that a ranking drop is due to spam or some other negative algorithmic action.

Second, a rankings drop could be due to a “general re-ranking” which is a vague term that went unexplained but is probably a reference to minor ranking adjustments outside of a core algorithm update.

The third takeaway is the part about a section of a website earning its own “mini-site” reputation. I think SEOs should not create theories about mini-sites and mini-site reputations because that’s not what Danny Sullivan said. He used the word “like,” which means he likely used the phrase “mini-site” as a metaphor.

Featured Image by Shutterstock/Joseph Hendrickson