OpenAI Releases GPT-5.1 With Improved Instruction Following via @sejournal, @MattGSouthern

OpenAI released GPT-5.1 Instant and GPT-5.1 Thinking with updates to conversational style and reasoning capabilities.

The updates begin rolling out today to paid users before expanding to free accounts.

OpenAI says this release addresses feedback from users who want AI that feels more natural to interact with, while also improving intelligence.

What’s New

GPT-5.1 Instant

GPT-5.1 Instant, ChatGPT’s most-used model, now defaults to a warmer, more conversational tone.

OpenAI reports improved instruction following, with the model more reliably answering the specific question asked rather than drifting into tangents.

GPT-5.1 Instant can use adaptive reasoning. The model decides when to think before responding to challenging questions, producing more thorough answers while maintaining speed.

GPT-5.1 Thinking

The advanced reasoning model adapts thinking time more precisely. On a representative distribution of ChatGPT tasks, GPT-5.1 Thinking runs roughly twice as fast on the fastest tasks and spends roughly twice as long on the slowest ones compared to GPT-5 Thinking.

Responses use less jargon and fewer undefined terms, which OpenAI says should make the most capable model more approachable for complex workplace tasks and explaining technical concepts.

Customization Options

OpenAI refined personality presets to better reflect common usage patterns. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain with updates, and new options include Professional, Candid, and Quirky.

These presets apply across all models. The Cynical (formerly Cynic) and Nerdy (formerly Nerd) options also remain available in the same personalization menu.

Beyond presets, OpenAI is experimenting with controls that let you tune specific characteristics such as response conciseness, warmth, scannability, and emoji frequency from personalization settings.

Personalization changes now take effect across all chats immediately, including ongoing conversations. Previously, changes only applied to new conversations started afterward.

The updated GPT-5.1 models also adhere more closely to custom instructions, giving you more precise tone and behavior control.

Rollout Timeline

GPT-5.1 Instant and Thinking begin rolling out today starting with paid subscribers. Free and logged-out users will get access afterward.

Enterprise and Education customers get a seven-day early access toggle to GPT-5.1 (off by default). After that window, GPT-5.1 becomes the default ChatGPT model.

GPT-5 (Instant and Thinking) remains available in the legacy models dropdown for paid subscribers for three months, giving people time to compare and adapt.

Why This Matters

GPT-5.1 can change how your day-to-day workflows behave. Better instruction following means less prompt tweaking and fewer off-brief outputs.

Adaptive reasoning may make simple tasks feel faster while giving more complex work, like technical explanations or data analysis, extra time.

Looking Ahead

OpenAI frames this update as a step toward personalized AI that adapts to individual preferences and tasks.

Updated personality styles and tone options roll out today. Granular characteristic tuning will roll out later this week as an experiment to a limited number of users, with further changes based on feedback.


Featured Image: Photo Agency/Shutterstock

Google Launches Ads Advisor & Analytics Advisor Globally via @sejournal, @MattGSouthern

Google is introducing two AI-powered tools built on Gemini: Ads Advisor in Google Ads and Analytics Advisor in Google Analytics.

Availability starts in early December for English-language accounts globally.

What’s New

Ads Advisor

Ads Advisor works inside your Google Ads account to surface optimization recommendations and, with your review and approval, apply the selected changes.

In Performance Max, you can ask how to prepare for a seasonal moment, such as a holiday. It may recommend adding sitelink extensions, then implement that update after you approve it.

Here is an example provided by Google:

Image Credit: Google

It can generate keywords, headlines, and ad copy from your site and campaign assets. It can also help diagnose performance shifts by answering diagnostic questions, identifying likely causes, and proposing fixes.

For policy issues, it can flag the violation and, in some cases, take an action such as editing an ad URL for you to approve.

Analytics Advisor

Analytics Advisor is a conversational experience in Google Analytics Standard and Analytics 360.

You can ask broad questions or request specific metrics, and it will generate visualizations and insights.

Here’s an example response provided by Google, after asking what’s the best opportunity to convert or re-engage users from a traffic spike:

Image Credit: Google

When metrics spike or drop, it performs key-driver analysis to explain what likely caused the change and prioritizes the factors that matter to business outcomes. It then connects those insights to growth opportunities with step-by-step guidance.

Why This Matters

Ads Advisor can help you move from suggestions to permissioned, in-product execution for Ads. Meanwhile, Analytics now offers quicker investigations and clearer next steps through key-driver analysis.

During busy times like holidays, this can help you save time by reducing the need to switch between diagnostics and implementation, all while keeping you in control of what gets changed.

Looking Ahead

Google says both advisors will appear in English-language Google Ads and Google Analytics accounts globally in early December. You will access them by logging into your accounts and using the in-product advisor interfaces.

Why Web Hosting Is A Critical Factor To Maximize SEO Results via @sejournal, @MattGSouthern

Most SEO professionals obsess over content, links, and technical implementations. We track algorithm updates and audit on-page elements with precision. But there’s one factor that determines whether all that work can deliver results.

Your web hosting controls every user’s first interaction with your site. It determines load speeds, uptime consistency, and Core Web Vitals scores before anyone reads a word you’ve written.

Here’s the reality: your hosting provider isn’t a commodity service. It’s the infrastructure that either supports or sabotages your SEO efforts. When technical SEO fails, the problem can often be traced back to hosting limitations you didn’t know existed.

Your Host Controls The Metrics Google Measures

Core Web Vitals are a key way hosting affects SEO: slow server infrastructure shows up directly in the page speed metrics Google measures.

Your Largest Contentful Paint (LCP) score starts with server response time. When Google’s crawler requests your page, your host must respond, process the request, and start delivering content.

Fast servers respond in under 200 milliseconds. Slower infrastructure takes 500+ milliseconds, degrading your LCP before optimization work matters.
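You can spot-check how your own host compares to these thresholds. Here's a minimal sketch using only Python's standard library; it times the full path to the first byte of the response (DNS, TLS, and server processing combined), and the URL in the commented example is a placeholder, not a specific recommendation.

```python
import time
import urllib.request

def time_to_first_byte(url: str) -> float:
    """Return seconds elapsed until the server delivers the first response byte."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read(1)  # pull the first byte of the body
    return time.perf_counter() - start

# Example (replace with your own URL):
# ttfb = time_to_first_byte("https://example.com/")
# print(f"TTFB: {ttfb * 1000:.0f} ms")
```

Run it a few times and from a location near your audience, since a single measurement can be skewed by DNS caching or a cold connection.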

Research analyzing 7,718 businesses across 676 sectors found that sites in top 10 ranking positions consistently showed faster server response times than their competitors. Google’s algorithm recognizes and rewards infrastructure quality.

Your hosting provider controls these metrics through several factors:

  • SSD storage processes read/write operations orders of magnitude faster than traditional hard drives.
  • HTTP/3 protocol support reduces latency by 3-7% compared to HTTP/2. [1, 2]
  • Content Delivery Networks distribute content to servers closer to users, eliminating distance delays.

Sites on infrastructure optimized for Core Web Vitals consistently achieve LCP under 2.5 seconds and INP under 200 milliseconds. These are Google’s “good” thresholds. Sites on legacy infrastructure struggle to meet these benchmarks regardless of front-end optimization.

Distance Still Matters In A Connected World

Server location introduces physical limitations that no optimization can overcome. Data travels through fiber optic cables at roughly two-thirds the speed of light, so distance still matters. A California server serving New York users introduces approximately 70 milliseconds of latency from physical distance alone.

This affects SEO through Core Web Vitals performance. Sites struggle to meet those thresholds when server infrastructure sits far from their primary audience, because distance adds latency that optimization alone can’t fully resolve.
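As a sanity check, the ~70 millisecond figure can be reproduced from the physics. The sketch below assumes light moves through fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum) and that a coast-to-coast fiber route runs around 7,000 km, longer than the ~3,900 km straight-line distance because of real-world routing.

```python
# Rough round-trip propagation delay from physical distance alone.
FIBER_SPEED_KM_PER_S = 200_000  # ~2/3 the speed of light in a vacuum

def round_trip_latency_ms(route_km: float) -> float:
    """Round-trip delay in milliseconds for a fiber route of the given length."""
    return 2 * route_km / FIBER_SPEED_KM_PER_S * 1000

# A ~7,000 km California-to-New-York fiber route:
print(round(round_trip_latency_ms(7_000)))  # 70
```

This is a floor, not an estimate of real latency: TCP and TLS handshakes multiply that round trip several times before the first byte arrives.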

The solution depends on your architecture. Shared, VPS, and dedicated hosting place your site on physical servers in specific data centers. Choose data centers close to your primary audience to reduce latency.

Cloud hosting distributes content differently. It serves content from multiple geographic points, mitigating distance penalties. But it requires careful configuration to ensure search engines can efficiently crawl your distributed content.

Uptime Affects How Often Google Crawls Your Site

Google allocates crawl budget partly based on your site’s reliability. When crawlers consistently encounter server timeouts, Google reduces crawl frequency to avoid wasting resources on unreliable infrastructure.

This creates a compounding problem.

Lower crawl frequency means new content takes longer to appear in search results. Updated pages don’t get re-indexed promptly. For sites publishing time-sensitive content or competing in fast-moving markets, hosting-related crawl delays can mean missing ranking opportunities.

Industry standard uptime guarantees of 99.9% translate to roughly 8.8 hours of downtime per year, or about 1.44 minutes daily. This sounds negligible, but timing matters. If those minutes occur when Google’s crawler attempts to access your site, you’ve lost that crawl opportunity. If they occur during peak traffic, you’ve lost conversions and sent negative signals to algorithms.
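The downtime figures above come from simple arithmetic on the uptime percentage, which you can apply to any guarantee a host advertises:

```python
# Translate an uptime guarantee into allowed downtime.
def downtime(uptime_pct: float) -> tuple[float, float]:
    """Return (hours of downtime per year, minutes of downtime per day)."""
    down_fraction = 1 - uptime_pct / 100
    return down_fraction * 365 * 24, down_fraction * 24 * 60

hours_per_year, minutes_per_day = downtime(99.9)
print(f"{hours_per_year:.2f} h/yr, {minutes_per_day:.2f} min/day")  # 8.76 h/yr, 1.44 min/day
```

Run the same function on 99.99% ("four nines") and the yearly allowance drops to under an hour, which is the kind of gap worth probing when comparing providers.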

The business impact varies by industry:

  • Ecommerce sites lose immediate sales and long-term ranking potential.
  • News properties miss brief windows when content is most valuable.
  • Local businesses miss moments when potential customers search for their services.

Any host claiming 100% uptime should raise skepticism. Server maintenance, network routing issues, and data center problems ensure some downtime will occur. Select providers whose infrastructure design minimizes both frequency and duration of outages.

Modern Protocols Create Measurable Performance Advantages

Google’s Page Experience signals extend beyond Core Web Vitals to security and modern web standards. HTTPS has been a confirmed ranking factor since 2014, and its importance continues growing.

Modern hosts include free SSL certificates through services like Let’s Encrypt as standard features. Legacy providers may charge for SSL or create barriers that discourage upgrading to secure connections.

Beyond basic HTTPS, hosting infrastructure determines whether you can leverage protocols that improve performance. HTTP/2 introduced multiplexing capabilities that reduce latency. HTTP/3 further reduces latency through improved connection handling and better performance on unreliable networks.

These improvements translate to measurable Core Web Vitals gains. HTTP/3 can reduce page load times by 3-7% compared to HTTP/2, particularly for mobile users. Since mobile performance increasingly drives rankings, hosting infrastructure supporting the latest protocols provides competitive advantages.

Security extends beyond encryption to broader concerns. Hosts with modern security practices protect against DDoS attacks that cause downtime, implement rate limiting that prevents bot traffic from overwhelming your server, and maintain updated server software preventing exploitation of vulnerabilities.

Scalability Prevents Success From Becoming A Problem

One of hosting’s most overlooked SEO implications emerges when you succeed. Content goes viral. A campaign drives unexpected traffic. Your site appears on a major news outlet. Suddenly, the hosting plan adequate for normal traffic becomes a bottleneck.

Server resource limits (CPU, RAM, bandwidth) determine how many simultaneous users your site can serve before performance degrades. When your infrastructure can’t handle success, SEO consequences arrive quickly.

The worst-case scenario sees viral success damaging your organic performance. Content driving traffic performs poorly for new visitors, creating negative signals. Meanwhile, Google reduces crawl frequency across your site, delaying indexation of new content designed to capitalize on visibility.

Hosting providers offering easy scaling paths prevent this. Cloud platforms can automatically scale resources to match traffic demands. Traditional providers with multiple plan tiers allow upgrades without changing providers or migrating your site, reducing technical risk and preserving existing configuration.

Evaluating Hosts as Strategic Infrastructure

The hosting decision requires evaluating providers as infrastructure partners whose capabilities enable or constrain your SEO strategy, not as feature checklists to compare.

Before selecting hosting, audit your requirements. Geographic distribution of your target audience determines whether server location matters or CDN coverage is essential. Content publication frequency affects how much crawl consistency matters. Traffic patterns indicate whether you need spike-handling resources or steady-state capacity.

Consider these strategic factors when evaluating hosts:

  • Review network infrastructure and data center locations relative to your primary markets.
  • Verify track record on actual uptime rather than advertised guarantees.
  • Examine scaling options to ensure you can grow without migration disruption.
  • Evaluate technical support quality. 24/7 availability and demonstrated expertise matter during problems affecting organic performance.

Third-party monitoring services track real-world performance across major hosts, providing verification beyond marketing claims.

Why Infrastructure Determines Your SEO Ceiling

Web hosting functions as a multiplier on SEO efforts. Excellent hosting won’t compensate for poor content, but poor hosting can completely undermine excellent optimization work.

Think of hosting as a building’s foundation. A weak foundation limits how high you can build and how much weight the structure can support. You can create architectural marvels on that foundation, but they remain vulnerable. Similarly, you can implement sophisticated SEO strategies on inadequate infrastructure, but those strategies will consistently underperform their potential.

The most successful SEO programs recognize infrastructure as a strategic investment rather than a commodity expense. They select hosting providers whose capabilities align with performance requirements, whose geographic distribution matches their audience, and whose technical sophistication supports modern web standards and protocols.

As search algorithms increasingly emphasize user experience through metrics like Core Web Vitals, the hosting decision becomes more consequential. The gap between sites on modern infrastructure and those on legacy systems will widen. The organic visibility advantages of fast, reliable, geographically distributed hosting will compound over time as Google’s algorithm continues refining how it measures and rewards site performance.

Your hosting provider should be a strategic partner in your SEO program, not just a vendor in your technology stack. The infrastructure decisions you make today determine the ceiling on your organic performance potential for months or years to come.

Good hosting runs in the background without you thinking about it. That’s what an SEO-friendly web host should do: Enable your optimization work to deliver results rather than limiting what’s possible.


Featured Image: N Universe/Shutterstock

Google Launches Structured Data For Merchant Shipping Policies via @sejournal, @MattGSouthern

Google Search now supports organization-level shipping policy markup, giving ecommerce websites a code-based way to surface delivery costs and transit times in Search and knowledge panels.

When you add ShippingService structured data, Google can display shipping information next to your products.

What’s New

Google added documentation describing ShippingService, which lets you define costs and delivery windows by product weight, dimensions, order value, or destination.

A standard policy lives under Organization via the hasShippingService property; product-specific overrides use OfferShippingDetails under Offer and support a smaller set of fields.

Implementation

Google recommends placing shipping policy markup on a single page.

Each ShippingService includes one or more ShippingConditions objects that specify when rates apply. If several apply to a product, Google uses the lowest cost and shows the associated speed. Fixed fees can be set with MonetaryAmount, and percentage-based fees with ShippingRateSettings. Transit times use ServicePeriod and can include businessDays and handling cutoff times.

Destination granularity supports country codes (ISO 3166-1), optional region codes (US, Australia, Japan only), and postal codes in the US, Canada, and Australia. Don’t provide both a region and postal code for the same condition.
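Putting those pieces together, here's a hypothetical sketch of what organization-level markup could look like, assembled as a Python dict and serialized to JSON-LD. The type and property names (`hasShippingService`, `ShippingService`, `ShippingConditions`, `MonetaryAmount`) follow the ones named in Google's documentation, but the nesting and concrete values here are illustrative, so verify the exact required fields against the doc before deploying.

```python
import json

# Hypothetical organization-level shipping markup. Values are placeholders.
markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",
    "hasShippingService": {
        "@type": "ShippingService",
        "name": "Standard shipping",
        "shippingConditions": {
            "@type": "ShippingConditions",
            # A flat fee for shipments to the US; country code per ISO 3166-1.
            "shippingDestination": {
                "@type": "DefinedRegion",
                "addressCountry": "US",
            },
            "shippingRate": {
                "@type": "MonetaryAmount",
                "value": 4.99,
                "currency": "USD",
            },
        },
    },
}

# Emit the JSON-LD for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

Since Google recommends a single page for this markup, the emitted block would typically live on a shipping policy page rather than being duplicated across product pages.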

If you combine markup with other Google shipping configurations, Google applies an order of precedence.

For example, when both markup and Search Console shipping settings are present, Google will use the Search Console settings. Google also notes that Content API for Shopping is the strongest source in this hierarchy.

Why This Matters

This gives you a markup-only path to publish shipping policies that Search can read, which may help keep details current even before products appear in feeds. If you already manage delivery settings in Merchant Center or Search Console, you can keep doing that; just be aware those sources can override page markup when both exist.

Looking Ahead

As with other rich results, your markup must follow Google’s structured data policies, Search Essentials, and the technical guidelines in the doc for it to be eligible for use in Search.


Featured Image: New Africa/Shutterstock

How To Evaluate Creative Performance in Meta Ads (and What To Test) via @sejournal, @timothyjjensen

When running a Meta Ads campaign, creative should be at the center of your strategy. Targeting options in the platform have become more limited than the super-granular segments available in years past, and overthinking targeting layers often raises costs while sacrificing performance.

Meta has also been emphasizing the importance of differentiating creative, warning that similar assets will essentially be seen as the same by their system, and potentially limiting reach as audience fatigue sets in from seeing an image or video over and over.

Assuming you’ve started with a foundation of properly configured event tracking, keeping your audience targeting high-level and focusing on testing diverse creative will generally drive the best results. Given the importance of creative, understanding how you can view the performance of your assets is crucial to informing your Meta advertising strategy.

Breaking It Down

From the main Ads Manager screen, you can use the Breakdown option to segment performance by individual ad creative (images and videos). This can be done at either the ad set or ad level.

Breaking down performance by creative in Meta. Image from author, October 2025

You can then see metrics such as clicks, link clicks, impressions, cost-per-click (CPC), click-through rate (CTR), reach, frequency, and spend for individual assets. However, you unfortunately cannot see counts for custom conversions or events broken down at this level.

While the lack of conversion data is a miss on Meta’s part, you can still find value in this reporting. If you’re using dynamic creative, you can see which individual assets are being served the most and which are receiving the highest engagement (based on CTR). You can also determine if frequency is running high for particular assets and if fatigue may be setting in.

If you’re running a single image ad without dynamic creative, but using different images for various placements (e.g., square images for feed placements and vertical images for Stories/Reels), you can still see conversion performance separated by breaking down by Placement instead of creative. Here, you can also delineate between different platforms (Facebook vs. Instagram vs. Threads) for individual placement types as well.

Custom Reporting

You can also use the Ads Reporting section to create a custom report that includes individual ad creatives in addition to metrics you’d like to see. When building your report, use a pivot table and check the box for “Ad creative” if you’d like to see data aggregated across campaigns and ad sets that are using the same creative.

Meta Ads Reporting. Image from author, October 2025

You’ll then see a thumbnail of the image or video, along with the accompanying copy. This report can be helpful for doing aggregate comparisons of how specific images are performing if you’re reusing them for different ad sets or campaigns. For instance, you may have campaigns segmented by market for budget reasons, but be serving the same assets.

Creative Testing

We’ve looked at how to view creative performance across your Meta campaigns in a couple of different ways, but what if you want to stage a test from the ground up to compare how different creative assets might perform? You can use the standard Experiments feature that has existed for a while in Meta and set up separate ad sets or campaigns with different ads, but let’s hone in on the Creative Testing feature that’s specifically built for this purpose.

To kick off a test, go to the edit view for an existing ad within a campaign and scroll down until you see the “Creative Testing” section. Select the option to “Set up test.”

Creative Testing. Image from author, October 2025

Next, you can define the criteria for your test setup. Start by choosing to create up to five ads. Then, allocate the amount of budget that you’d like to use and define the length of the test (up to 30 days).

Creative Test Setup. Image from author, October 2025

Finally, select the metric you’ll use to evaluate success. You can choose a more general goal, such as CPC or cost per result, or you can choose a specific conversion (either custom conversion or event) to look at CPA.

Then, confirm your test and adjust the new creatives to include the updated assets you’d like to use. You can view results in the Experiments section, or via the campaign in Ads Manager, as the test begins to run.

According to Meta, using this approach allows you to keep learnings from the test ads within your ad set if you choose to run them moving forward. As the learning phase can be a hindrance to getting new ads off the ground, this tactic can be a benefit in addition to the ease of testing.

Measurement Outside Meta

In addition to what you can see in Meta, use tools such as Google Analytics to see performance from ad clicks to your site. As long as you’ve properly tagged your ad URLs, you can see how specific ones have driven engagement and conversions on your site.
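Tagging typically means appending UTM parameters to each ad's landing-page URL so Google Analytics can attribute the visit. The sketch below builds a tagged URL with Python's standard library; the parameter values are placeholders, so use whatever naming convention your account already follows.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def tag_url(base_url: str, campaign: str, content: str) -> str:
    """Append UTM parameters identifying the source, campaign, and creative."""
    params = {
        "utm_source": "facebook",
        "utm_medium": "paid_social",
        "utm_campaign": campaign,
        "utm_content": content,  # e.g., which creative variant drove the click
    }
    return f"{base_url}?{urlencode(params)}"

url = tag_url("https://example.com/landing", "spring_sale", "video_a")
# Parsing the query string back confirms the creative label survives the round trip:
print(parse_qs(urlparse(url).query)["utm_content"])  # ['video_a']
```

Using `utm_content` for the creative variant is what lets you compare individual assets, not just campaigns, once the data lands in Analytics.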

Meta’s attribution (even click attribution) is notorious for taking all the credit it can, but using another platform can allow you to see Meta traffic alongside other sources that may also have influenced conversion. You can then determine how Meta may have influenced the buying journey, as it often may appear earlier in consideration and be followed up by sources such as search.

What Should You Test?

How can you plan creative tests that give you actionable data? As discussed at the beginning of the article, differentiation is key to both avoiding fatigue and providing significant results.

While design capabilities may vary depending on your business’s resources and bandwidth, the availability of a plethora of AI tools and user-friendly design platforms allows for simple creation of multiple assets. Think through what works best for your brand, but some potential suggestions for comparison include:

  • Illustrated vector images vs. photography.
  • Stock photography vs. your brand’s photography.
  • Animation vs. static images.
  • Text-heavy assets vs. minimal or no text.
  • Informal “phone” videos vs. polished formal videos.

From here, you can evaluate which types of assets tend to best resonate with your audience based on the data you’re able to see both in Meta and in third-party platforms. Also, be careful not to completely write off a particular type of creative because of a past test. Audience preferences can change over time, so it’s worth retesting periodically.

How Can You Apply Your Learnings?

As you evaluate the performance of various assets, you can translate the learnings into practical application in Meta and beyond. For instance, if you find that animated text videos tend to perform well, you can create them for multiple products and test different video backgrounds.

Additionally, take learnings beyond Meta and apply them to other platforms. With the caveat that each platform is unique and what works on one may not be guaranteed to perform elsewhere, images that work on Meta may be worth testing in other channels, such as Google Demand Gen and LinkedIn.

Outside of ad platforms, you can also implement image styles or videos that performed in Meta testing onto your landing pages. For instance, if your Meta audience appears to resonate more with illustrated vector imagery as opposed to photography, test that type of creative in graphics on your site.

Start Evaluating & Start Testing

If you haven’t previously been paying enough attention to the performance of specific creatives in Meta, now is a crucial time to start doing so. Ensure that you’re using different enough variants and avoiding fatigue. Think through some tests you can set up for your clients or your brand, and start creating and launching new graphics.


Featured Image: Viktoriia_M/Shutterstock

YouTube Debunks 24-48 Hour Upload Delay Recommendation via @sejournal, @MattGSouthern

YouTube’s Rene Ritchie says the recommendation system relies on audience behavior that only begins once a video is public. Waiting 24–48 hours offers no benefit.

  • YouTube’s recommendation system relies on audience behavior, which starts only when a video is public.
  • You can upload early to let copyright and monetization checks finish before publishing.
  • Waiting hours, weeks, or months won’t yield different results.

OpenAI’s Sam Altman Says Personalized AI Raises Privacy Concerns via @sejournal, @martinibuster

In a recent interview at Stanford University, OpenAI CEO Sam Altman predicted that AI security will become the defining problem of the next phase of AI development, calling it one of the best fields to study right now. He also cited personalized AI as one example of a security concern he’s been thinking about lately.

What Does AI Security Mean Today?

Sam Altman said that concerns about AI safety will be reframed as AI security issues that can be solved by AI.

Interview host Dan Boneh asked:

“So what does it mean for an AI system to be secure? What does it mean for even trying to kind of make it do things it wasn’t designed to do?

How do we protect AI systems from prompt injections and other attacks like that? How do you think of AI security?

I guess the concrete question I want to ask is, among all the different things we can do with AI, this course is about learning one sliver of the field. Is this a good area? Should people go into this?”

Sam Altman encouraged today’s students to study AI security.

He answered:

“I think this is one of the best areas to go study. I think we are soon heading into a world where a lot of the AI safety problems that people have traditionally talked about are going to be recast as AI security problems in different ways.

I also think that given how capable these models are getting, if we want to be able to deploy them for wide use, the security problems are going to get really big. You mentioned many areas that I think are super important to figure out. Adversary robustness in particular seems like it’s getting quite serious.”

What Altman means is that people are starting to find ways to trick AI systems, and the problem is becoming serious enough that researchers and engineers need to focus on making AI resistant to manipulation and other kinds of attacks, such as prompt injections.

AI Personalization Becoming A Security Concern

Altman also said that something he’s been thinking a lot about lately is possible security issues with AI personalization. People appreciate personalized responses from AI, but he said this could open the door to malicious hackers figuring out how to steal (exfiltrate) sensitive data.

He explained:

“One more that I will mention that you touched on a little bit, but just it’s been on my mind a lot recently. There are two things that people really love right now that taken together are a real security challenge.

Number one, people love how personalized these models are getting. So ChatGPT now really gets to know you. It personalizes over your conversational history, your data you’ve connected to it, whatever else.

And then number two is you can connect these models to other services. They can go off and like call things on the web and, you know, do stuff for you that’s helpful.

But what you really don’t want is someone to be able to exfiltrate data from your personal model that knows everything about you.

And humans, you can kind of trust to be reasonable at this. If you tell your spouse a bunch of secrets, you can sort of trust that they will know in what context what to tell to other people. The models don’t really do this very well yet.

And so if you’re telling like a model all about your, you know, private health care issues, and then it is off, and you have it like buying something for you, you don’t want that e-commerce site to know about all of your health issues or whatever.

But this is a very interesting security problem to solve this with like 100% robustness.”

Altman identifies personalization as both a breakthrough and a new opening for cyber attack. The same qualities that make AI more useful also make it a target, since models that learn from individual histories could be manipulated to reveal them. Altman shows how convenience can become a source of exposure, explaining that privacy and usability are now security challenges.

Lastly, Altman circled back to AI as both the security problem and the solution.

He concluded:

“Yeah, by the way, it works both directions. Like you can use it to secure systems. I think it’s going to be a big deal for cyber attacks at various times.”

Takeaways

  • AI Security As The Next Phase Of AI Development
    Altman predicts that AI security will replace AI safety as the central challenge and opportunity in artificial intelligence.
  • Personalization As A New Attack Surface
    The growing trend of AI systems that learn from user data raises new security concerns, since personalization could expose opportunities for attackers to extract sensitive information.
  • Dual Role Of AI In Cybersecurity
    Altman emphasizes that AI will both pose new security threats and serve as a powerful tool to detect and prevent them.
  • Emerging Need For AI Security Expertise
    Altman’s comments suggest that there will be a rising demand for professionals who understand how to secure, test, and deploy AI responsibly.

Is AI Search SEO Leaving Bigger Opportunities Behind? via @sejournal, @martinibuster

A recent podcast by Ahrefs raised two issues about optimizing for AI search that can cause organizations to underperform and miss out on opportunities to improve sales. The conversation illustrates a gap between realistic expectations for AI-based trends and what can be achieved through overlooked opportunities elsewhere.

YouTube Is Second Largest Search Engine

The first point raised in the podcast is that YouTube is the second-largest search engine: more people type queries into YouTube’s search bar than into any search engine except Google itself. So it absolutely makes sense for companies to seriously consider how a video strategy can work to increase traffic and brand awareness.

It should be a no-brainer that businesses figure out YouTube, and yet many businesses are rushing to spend time and money optimizing for answer engines like Perplexity and ChatGPT, which have a fraction of the traffic of YouTube.

Patrick Stox explained:

“YouTube is the second largest search engine. There’s a lot of focus on all these AI assistants. They’re in total driving less than 1% of your traffic. YouTube might be a lot more. I don’t know how much it’s going to drive traffic to the website, but there’s a lot of eyes on it. I know for us, like we see it in our signups, …they sign up for Ahrefs.

It’s an incredible channel that I think as people need to diversify, to kind of hedge their bets on where their traffic is coming from, this would be my first choice. Like go and do more video. There’s your action item. If you’re not doing it, go do more video right now.”

Tim Soulo, Ahrefs CMO, found it curious that so many people are looking two or three years ahead for opportunities that may or may not materialize on AI assistants, while overlooking the real benefits available today on YouTube.

He commented:

“I feel that a lot of people get fixated on AI assistants like ChatGPT and Perplexity and optimizing for AI search because they are kind of looking three, five years ahead and they are kind of projecting that in three, five years, that might be the dominant thing, how people search.

…But again, if we focus on today, YouTube is much more popular than ChatGPT and YouTube has a lot more business potential than ChatGPT. So yeah, definitely you have to invest in AI search. You have to do the groundwork that would help you rank in Google, rank in ChatGPT and everything. …I don’t see YouTube losing its relevance five years from now. I can only see it getting bigger and bigger because the new generation of people that is growing up right now, they are very video oriented. Short form video, long form video. So yeah, definitely. If you’re putting all your eggs in the basket of ChatGPT, but not putting anything in YouTube, that’s a big mistake.”

Patrick Stox agreed with Tim, noting that Instagram and TikTok are big for short-form videos that are wildly popular today, and encouraged viewers and listeners to see how video can fit into their marketing.

Some of the disconnect regarding SEO and YouTube is that SEOs may feel that SEO is about Google, and YouTube is therefore not their domain of responsibility. I would counter that YouTube should be part of SEOs’ concern because people use it for reviews, how-to information, and product research, and its search volume is second only to Google’s.

SEO/AEO/GEO Can’t Solve All AI Search Issues

The second topic they touched on was the expectations placed on SEO to solve all of a business’s traffic and visibility problems. Patrick Stox and Tim Soulo suggested that high rankings and a satisfactory marketing outcome begin and end with a high-quality product, service, and content. Problems at the product or service end cause friction and result in negative sentiment on social media. This isn’t something that you can SEO yourself out of.

Patrick Stox explained:

“We only have a certain amount of control, though. We can go and create a bunch of pages, a bunch of content. But if you have real issues, like if everyone suddenly is like Nvidia’s graphics cards suck and they’re saying that on social media and Reddit and everything, YouTube, there’s only so much you can do to combat that.

…And there might be tens of thousands of them and there’s one of me. So what am I gonna do? I’m gonna be a drop in the bucket. It’s gonna be noise in the void. The internet is still the one controlling the narrative. So there’s only so much that SEOs are gonna be able to do in a situation like that.

…So this is going to get contentious in a lot of organizations where you’re going to have to do something that the execs are going to be yelling, can’t you just change that, make it go away?”

Tim and Patrick went on to cite their own experience with a pricing change Ahrefs made a few years ago, which customers balked at. Ahrefs made the change because they thought it would make their service more affordable, but despite their best efforts to answer user questions and get control of the conversation, the controversy wouldn’t go away. They ultimately decided to give users what they wanted.

The point is that positive word of mouth isn’t necessarily an SEO issue, even though SEO/GEO/AEO practitioners are now expected to build positive brand associations so that their brands are recommended by AI Mode, ChatGPT, and Perplexity.

Takeaways

  • Find balance between AI search and immediate business opportunities:
    Some organizations may focus too heavily on optimizing for AI assistants at the expense of video and multimodal search opportunities.
  • YouTube’s marketing power:
    YouTube is the second-largest search engine and a major opportunity for traffic and brand visibility.
  • Realistic expectations for SEO:
    SEO/GEO/AEO cannot fix problems rooted in poor products, services, or customer sentiment. Long-term visibility in AI search depends not just on optimization, but on maintaining positive brand sentiment.

Watch the video at about the 36-minute mark.

Featured Image by Shutterstock/Collagery

The Download: surviving extreme temperatures, and the big whale-wind turbine conspiracy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The quest to find out how our bodies react to extreme temperatures

Climate change is subjecting vulnerable people to temperatures that push their limits. In 2023, about 47,000 heat-related deaths are believed to have occurred in Europe. Researchers estimate that climate change could add an extra 2.3 million European heat deaths this century. That’s heightened the stakes for solving the mystery of just what happens to bodies in extreme conditions.

While we broadly know how people thermoregulate, the science of keeping warm or cool is mottled with blind spots. Researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. Read the full story.

—Max G. Levy

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

Whales are dying. Don’t blame wind turbines.

Whale deaths have become a political flashpoint. There are currently three active mortality events for whales in the Atlantic, meaning clusters of deaths that experts consider unusual. And Republican lawmakers, conservative think tanks, and—most notably—President Donald Trump (a longtime enemy of wind power) are making dubious claims that offshore wind farms are responsible.

But any finger-pointing at wind turbines for whale deaths ignores the fact that whales have been washing up on beaches since long before the giant machines were rooted in the ocean floor. This is something that has always happened. And the scientific consensus is clear: There’s no evidence that wind farms are the cause of recent increases in whale deaths. Read the full story.

—Casey Crownhart

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.

The State of AI: Energy is king, and the US is falling behind

In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying in the US, where massive data centers are waiting to come online. It doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.

It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace.

If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China. Read the full story.

—Casey Crownhart & Pilita Clark

This is from The State of AI, our subscriber-only collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power.

Every Monday for the next four weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the magazine are able to read the whole thing. Sign up here to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How China narrowed its AI divide with the US
America still has a clear lead—but for how long? (WSJ $)
+ The AI boom won’t offset tariffs and America’s immigration crackdown forever. (FT $)
+ How quickly is AI likely to progress really? (Economist $)
+ Is China about to win the AI race? (MIT Technology Review)

2 Anthropic is due to turn a profit much faster than OpenAI
The two companies are taking very different approaches to making money. (WSJ $)
+ OpenAI has lured Intel’s AI chief away. (Bloomberg $)

3 The EU is setting up a new intelligence sharing unit
It’s a bid to shore up intel in the wake of Donald Trump’s plans to reduce security support for Europe. (FT $)

4 Trump officials are poised to suggest oil drilling off the coast of California
That’s likely to rile the state’s politicians and leaders. (WP $)
+ What role should oil and gas companies play in climate tech? (MIT Technology Review)

5 America’s cyber defenses are poor
Repeated cuts and mass layoffs are making it harder to protect the nation. (The Verge)

6 China is on track to hit its peak CO2 emissions target early
Although it’s likely to miss its goal for cutting carbon intensity. (The Guardian)
+ World leaders are heading to COP30 in Brazil this week. (New Yorker $)

7 OpenAI cannot use song lyrics without a license
That’s what a German court has decided, after siding with a music rights society. (Reuters)
+ OpenAI is no stranger to legal proceedings. (The Atlantic $)
+ AI is coming for music. (MIT Technology Review)

8 A small Michigan town is fighting a proposed AI data center
The planned center is part of a collaboration between the University of Michigan and nuclear weapons scientists. (404 Media)
+ Here’s where America’s data centers should be built instead. (Wired $)
+ Communities in Latin America are pushing back, too. (The Guardian)
+ Should we be moving data centers to space? (MIT Technology Review)

9 AI models can’t tell the time ⏰
Analog clocks leave them completely stumped. (IEEE Spectrum)

10 ChatGPT is giving daters the ick
These refuseniks don’t want anything to do with AI, or love interests who use it. (The Guardian)

Quote of the day

“I never imagined that making a cup of tea or obtaining water, antibiotics, or painkillers would require such tremendous effort.”

—An anonymous member of startup accelerator Gaza Sky Geeks tells Rest of World about the impact the war has had on them.

One more thing

How Rust went from a side project to the world’s most-loved programming language

Many software projects emerge because—somewhere out there—a programmer had a personal problem to solve.

That’s more or less what happened to Graydon Hoare. In 2006, Hoare was a 29-year-old computer programmer working for Mozilla. After a software crash broke the elevator in his building, he set about designing a new computer language; one that he hoped would make it possible to write small, fast code without memory bugs.

That language developed into Rust, one of the hottest new languages on the planet. But while it isn’t unusual for someone to make a new computer language, it’s incredibly rare for one to take hold and become part of the programming pantheon. How did Rust do it? Read the full story.

—Clive Thompson

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Having a bit of a rubbish day so far? Here’s how to make it better.
+ A Hungarian man played Dance Dance Revolution for 144 hours non-stop, because he knows how to have a seriously good time.
+ A new book is celebrating cats, as it should (thanks Jess!).
+ How a poem from a medieval trickster sowed the seed for hundreds of years of bubonic plague misinformation 🐀

Digital Goods Stay Duty-Free

Unlike physical goods, software and digital downloads have long been exempt from duties and tariffs. Many nations, including the United States, want to keep it that way.

In October, the Trump administration secured new trade deals with Malaysia, Cambodia, and Thailand that included a call for a permanent end to tariffs and taxes on digital goods and online services.

The agreements reaffirm the United States’ long-standing position that digital commerce — software, streaming, cloud storage, and similar services — should flow freely across borders, untaxed and unrestricted.

While this policy benefits large technology companies like Amazon, Meta, and Google, it also supports thousands of small and mid-sized ecommerce firms that sell digital products.

Digital Trade Policy

The World Trade Organization’s moratorium on customs duties on electronic transmissions is at the heart of modern digital trade.

First adopted in 1998, the moratorium restricts member countries from charging customs duties on digital goods and services delivered over the internet. WTO members have extended it multiple times. It will expire in March 2026.

President Donald Trump with Malaysian Prime Minister Anwar Ibrahim. Source: The White House.

The Trump administration wants to make the moratorium permanent. In the new trade pacts, the U.S. secured commitments from Malaysia, Cambodia, and Thailand not to impose digital services taxes or discriminate against American providers. The agreement with Malaysia includes language for permanent extension of the WTO moratorium.

The U.S. is not alone. On November 6, 2025, a coalition of nations from Africa, the Caribbean, and the Pacific proposed yet another extension of the same agreement. Support for tariff-free digital trade has become a rare point of consensus in an otherwise divided global economy.

Digital Trade and Ecommerce

The moratorium is important for ecommerce businesses, as it affects the software they use and their digital products.

Without the tariff ban, duties could apply to:

  • Software-as-a-service tools like BigCommerce or Adobe Creative Cloud,
  • Digital downloads such as ebooks, sheet music, or stock photos,
  • Streaming and cloud platforms like Netflix and Amazon Web Services.

The WTO rule has created an open digital marketplace, allowing small firms to reach global customers with minimal friction. Keeping it in place preserves the status quo.

If the moratorium were to lapse, governments could begin taxing digital transactions, such as a subscription to Shopify (Canada) or PrestaShop (France). Digital products such as online courses or downloadable sheet music might also face tariffs.

Physical Tariffs

While it continues to promote duty-free digital trade, the United States has recently closed a gaping tax loophole in physical commerce: the de minimis tariff exemption.

That rule, which allowed low-value imports into the United States duty-free, expired at the end of August 2025. Since then, some orders entering the country have been subject to customs duties and inspection.

For select U.S. retailers, the change has been welcome. These companies argue that the exemption gave foreign sellers an unfair edge by letting them ship cheap goods directly to U.S. consumers without paying import taxes or local fees.

China, for example, leveraged its controlled economy and demand for hard currency to gain a further ecommerce advantage — again, putting American retailers at a disadvantage.

A similar debate is emerging in Latin America. Speaking at a conference in Buenos Aires in November 2025, Juan Martín de la Serna, head of Mercado Libre Argentina, called for tighter regulations, such as tariffs, on low-cost Chinese goods.

“The influx of cheap, low-quality imports from China risks undermining small and medium-sized businesses,” de la Serna said.

Mexico, Chile, and Uruguay have already tightened import and tax rules on those shipments. These arguments echo the U.S. position.

Controlled Advantage

China’s hybrid economy is a blend of free markets and state control. While it has many drawbacks, the arrangement gives its exporters advantages in both manufacturing and ecommerce.

The Chinese Communist Party can direct credit, manage currency values, and subsidize key industries, from logistics to artificial intelligence.

These levers help Chinese sellers offer products at prices that independent retailers elsewhere find difficult to match.

Local governments often provide tax rebates and cheap loans to manufacturers that sell through cross-border platforms, while the central government encourages ecommerce exports as a source of hard currency.

This state-supported structure aids China’s ecommerce giants.

Two Policies

Many of the same imbalances that define the global trade of physical goods have surfaced in digital products.

Controlled economies can dominate digital markets by shaping access to data, funding domestic tech giants, and restricting foreign competition. In China, for example, strict data-localization rules keep Western platforms out while homegrown firms expand abroad.

Low wages and intense labor conditions extend beyond factories into the digital workforce, from content moderation and customer support to the new wave of generative AI data labeling.

Even artificial intelligence itself reflects global inequality. Chinese models trained on Western data are increasingly deployed by state-backed enterprises, reinforcing competitive differences.

Nonetheless, worldwide business is complex, and in a sense the divergence in U.S. policies toward physical and digital goods could serve as a policy test. Which philosophy, protectionism or unfettered access, proves more successful for businesses and, importantly, for consumers?