Google Search Advocate John Mueller says large video files loading in the background are unlikely to have a noticeable SEO impact if page content loads first.
A site owner on Reddit’s r/SEO asked whether a 100MB video would hurt SEO if the page prioritizes loading a hero image and content before the video. The video continues loading in the background while users can already see the page.
Mueller responded:
“I don’t think you’d notice an SEO effect.”
Broader Context
The question addresses a common concern for sites using large hero videos or animated backgrounds.
The site owner described an implementation where content and images load within seconds, displaying a “full visual ready” state. The video then loads asynchronously and replaces the hero image once complete.
This method aligns with Google’s documentation on lazy loading, which recommends deferring non-critical content to improve page performance.
Google’s help documents state that lazy loading is “a common performance and UX best practice” for non-critical or non-visible content. The key requirement is ensuring content loads when visible in the viewport.
Why This Matters
If you’re running hero videos or animated backgrounds on landing pages, this suggests that background loading strategies are unlikely to harm your rankings. The critical factor is ensuring your primary content reaches users quickly.
Google measures page experience through Core Web Vitals metrics like Largest Contentful Paint. In many cases, a video that loads after visible content is ready shouldn’t block these measurements.
Implementation Best Practices
Google’s web.dev documentation recommends using preload="none" on video elements to avoid unnecessary preloading of video data. Adding a poster attribute provides a placeholder image while the video loads.
For videos that autoplay, the documentation suggests using the Intersection Observer API to load video sources only when the element enters the viewport. This lets you maintain visual impact without affecting initial page load performance.
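As a minimal sketch of that pattern, the markup and script below defer the video source until the element scrolls into view. The element ID, file names, and attribute choices are illustrative, not a prescribed implementation:

```html
<!-- Hero video deferred until it scrolls into view.
     preload="none" stops the browser fetching video data up front;
     the poster image fills the frame in the meantime.
     IDs and file names here are placeholders. -->
<video id="hero-video" preload="none" poster="hero-placeholder.jpg" muted loop playsinline>
  <!-- data-src keeps the real source out of the initial page load -->
  <source data-src="hero.mp4" type="video/mp4">
</video>

<script>
  const video = document.getElementById('hero-video');
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      // Promote data-src to src only once the element is visible,
      // then load and play the video.
      for (const source of video.querySelectorAll('source[data-src]')) {
        source.src = source.dataset.src;
      }
      video.load();
      video.play();
      observer.unobserve(video); // one-shot: stop watching after load
    });
  });
  observer.observe(video);
</script>
```

Because the poster image can serve as the Largest Contentful Paint candidate, the page reaches a visually complete state before any video bytes are requested.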
Looking Ahead
Site owners using background video can generally continue doing so without major SEO concerns, provided content loads first. Focus on Core Web Vitals metrics to verify your implementation meets performance thresholds.
Test your setup using Google Search Console’s URL Inspection Tool to confirm video elements appear correctly in rendered HTML.
SE Ranking analyzed 129,000 unique domains across 216,524 pages in 20 niches to identify which factors correlate with ChatGPT citations.
The number of referring domains ranked as the single strongest predictor of citation likelihood.
What The Data Says
Backlinks And Trust Signals
Link diversity showed the clearest correlation with citations. Sites with up to 2,500 referring domains averaged 1.6 to 1.8 citations. Those with over 350,000 referring domains averaged 8.4 citations.
The researchers identified a threshold effect at 32,000 referring domains. At that point, citations nearly doubled from 2.9 to 5.6.
Domain Trust scores followed a similar pattern. Sites with Domain Trust below 43 averaged 1.6 citations. The benefits accelerated significantly at the top end: sites scoring 91–96 averaged 6 citations, while those scoring 97–100 averaged 8.4.
Page Trust mattered less than domain-level signals. Any page with a Page Trust score of 28 or above received roughly the same citation rate (8.3 average), suggesting ChatGPT weighs overall domain authority more heavily than individual page metrics.
One notable finding: .gov and .edu domains didn’t automatically outperform commercial sites. Government and educational domains averaged 3.2 citations, compared to 4.0 for sites without trusted zone designations.
The authors wrote:
“What ultimately matters is not the domain name itself, but the quality of the content and the value it provides.”
Traffic & Google Rankings
Domain traffic ranked as the second most important factor, though the correlation only appeared at high traffic levels.
Sites under 190,000 monthly visitors averaged 2 to 2.9 citations regardless of exact traffic volume. A site receiving 20 organic visitors performed similarly to one receiving 20,000.
Only after crossing 190,000 monthly visitors did traffic correlate with increased citations. Domains with over 10 million visitors averaged 8.5 citations.
Homepage traffic specifically mattered. Sites with at least 7,900 organic visitors to their main page showed the highest citation rates.
Average Google ranking position also tracked with ChatGPT citations. Pages ranking between positions 1 and 45 averaged 5 citations. Those ranking 64 to 75 averaged 3.1.
The authors noted:
“While this doesn’t prove that ChatGPT relies on Google’s index, it suggests both systems evaluate authority and content quality similarly.”
Content Depth & Structure
Content length showed consistent correlation. Articles under 800 words averaged 3.2 citations. Those over 2,900 words averaged 5.1.
Structure mattered beyond raw word count. Pages with section lengths of 120 to 180 words between headings performed best, averaging 4.6 citations. Extremely short sections under 50 words averaged 2.7 citations.
Pages with expert quotes averaged 4.1 citations versus 2.4 for those without. Content with 19 or more statistical data points averaged 5.4 citations, compared to 2.8 for pages with minimal data.
Content freshness produced one of the clearer findings. Pages updated within three months averaged 6 citations. Outdated content averaged 3.6.
Surprisingly, the raw data showed that pages with FAQ sections actually received fewer citations (3.8) than those without (4.1). However, the researchers noted that their predictive model viewed the absence of an FAQ section as a negative signal. They suggest this discrepancy exists because FAQs often appear on simpler support pages that naturally earn fewer citations.
The report also found that using question-style headings (e.g., as H1s or H2s) underperformed straightforward headings, earning 3.4 citations versus 4.3. This contradicts standard voice search optimization advice, suggesting AI models may prefer direct topical labeling over question formats.
Social Signals & Review Platforms
Brand mentions on discussion platforms showed strong correlation with citations.
Domains with minimal Quora presence (up to 33 mentions) averaged 1.7 citations. Heavy Quora presence (6.6 million mentions) corresponded to 7.0 citations.
Reddit showed similar patterns. Domains with over 10 million mentions averaged 7 citations, compared to 1.8 for those with minimal activity.
The authors positioned this as particularly relevant for smaller sites:
“For smaller, less-established websites, engaging on Quora and Reddit offers a way to build authority and earn trust from ChatGPT, similar to what larger domains achieve through backlinks and high traffic.”
Presence on review platforms like Trustpilot, G2, Capterra, Sitejabber, and Yelp also correlated with increased citations. Domains listed on multiple review platforms earned 4.6 to 6.3 citations on average. Those absent from such platforms averaged 1.8.
Technical Performance
Page speed metrics correlated with citation likelihood.
Pages with First Contentful Paint under 0.4 seconds averaged 6.7 citations. Slower pages (over 1.13 seconds) averaged 2.1.
Speed Index showed similar patterns. Sites with indices below 1.14 seconds performed reliably well. Those above 2.2 seconds experienced steep decline.
One counterintuitive finding: pages with the fastest Interaction to Next Paint scores (under 0.4 seconds) actually received fewer citations (1.6 average) than those with moderate INP scores (0.8 to 1.0 seconds, averaging 4.5 citations). The researchers suggested extremely simple or static pages may not signal the depth ChatGPT looks for in authoritative sources.
URL & Title Optimization
The report found that broad, topic-describing URLs outperformed keyword-optimized ones.
Pages with low semantic relevance between URL and target keyword (0.00 to 0.57 range) averaged 6.4 citations. Those with highest semantic relevance (0.84 to 1.00) averaged only 2.7 citations.
Titles followed the same pattern. Titles with low keyword matching averaged 5.9 citations. Highly keyword-optimized titles averaged 2.8.
The researchers concluded: “ChatGPT prefers URLs that clearly describe the overall topic rather than those strictly optimized for a single keyword.”
Factors That Underperformed
Several commonly recommended AI optimization tactics showed minimal or negative correlation with citations.
FAQ schema markup underperformed. Pages with FAQ schema averaged 3.6 citations. Pages without averaged 4.2.
LLMs.txt files showed negligible impact. Outbound links to high-authority sites also showed minimal effect on citation likelihood.
Why This Matters
The findings suggest your existing SEO strategy may already serve AI visibility goals. If you’re building referring domains, earning traffic, maintaining fast pages, and keeping content updated, you’re addressing the factors this report identified as most predictive.
For smaller sites without extensive backlink profiles, the research points to community engagement on Reddit and Quora as a viable path to building authority signals. The data also suggests focusing on content depth over keyword density.
The researchers note that factors are interdependent. Optimizing one signal while ignoring others reduces overall effectiveness.
Looking Ahead
SE Ranking analyzed ChatGPT specifically. Other AI systems may weight factors differently.
SE Ranking doesn’t specify which ChatGPT version or timeframe the data represents, so these patterns should be treated as directional correlations rather than proof of how ChatGPT’s ranking algorithm works.
AI search isn’t just changing what content ranks; it’s quietly redrawing where your brand appears to belong. As large language models (LLMs) synthesize results across languages and markets, they blur the boundaries that once kept content localized. Traditional geographic signals, such as hreflang, ccTLDs, and regional schema, are being bypassed, misread, or overwritten by global defaults. The result: your English site becomes the “truth” for all markets, while your local teams wonder why their traffic and conversions are vanishing.
This article focuses primarily on search-grounded AI systems such as Google’s AI Overviews and Bing’s generative search, where the problem of geo-identification drift is most visible. Purely conversational AI may behave differently, but the core issue remains: when authority signals and training data skew global, synthesis often loses geographic context.
The New Geography Of Search
In classic search, location was explicit:
IP, language, and market-specific domains dictated what users saw.
Hreflang told Google which market variant to serve.
Local content lived on distinct ccTLDs or subdirectories, supported by region-specific backlinks and metadata.
AI search breaks this deterministic system.
In a recent article on “AI Translation Gaps,” international SEO expert Blas Giffuni demonstrated this problem when he typed the phrase “proveedores de químicos industriales.” Rather than surfacing local market websites listing industrial chemical suppliers in Mexico, the engine presented a translated list from the US, some of which either did not do business in Mexico or did not meet local safety or business requirements. A generative engine doesn’t just retrieve documents; it synthesizes an answer using whatever language or source it finds most complete.
If your local pages are thin, inconsistently marked up, or overshadowed by global English content, the model will simply pull from the worldwide corpus and rewrite the answer in Spanish or French.
On the surface, it looks localized. Underneath, it’s English data wearing a different flag.
Why Geo-Identification Is Breaking
1. Language ≠ Location
AI systems treat language as a proxy for geography. A Spanish query could represent Mexico, Colombia, or Spain. If your signals don’t specify which markets you serve through schema, hreflang, and local citations, the model lumps them together.
When that happens, your strongest instance wins. And nine times out of 10, that’s your main English-language website.
2. Market Aggregation Bias
During training, LLMs learn from corpus distributions that heavily favor English content. When related entities appear across markets (‘GlobalChem Mexico,’ ‘GlobalChem Japan’), the model’s representations are dominated by whichever instance has the most training examples, typically the English global brand. This creates an authority imbalance that persists during inference, causing the model to default to global content even for market-specific queries.
3. Canonical Amplification
Search engines naturally try to consolidate near-identical pages, and hreflang exists to counter that bias by telling them that similar versions are valid alternatives for different markets. When AI systems retrieve from these consolidated indexes, they inherit this hierarchy, treating the canonical version as the primary source of truth. Without explicit geographic signals in the content itself, regional pages become invisible to the synthesis layer, even when they are adequately tagged with hreflang.
This amplifies market-aggregation bias; your regional pages aren’t just overshadowed, they’re conceptually absorbed into the parent entity.
Will This Problem Self-Correct?
As LLMs incorporate more diverse training data, some geographic imbalances may diminish. However, structural issues like canonical consolidation and the network effects of English-language authority will persist. Even with perfect training data distribution, your brand’s internal hierarchy and content depth differences across markets will continue to influence which version dominates in synthesis.
The Ripple Effect On Local Search
Global Answers, Local Users
Procurement teams in Mexico or Japan receive AI-generated answers derived from English pages. The contact info, certifications, and shipping policies are wrong, even if localized pages exist.
Local Authority, Global Overshadowing
Even strong local competitors are being displaced because models weigh the English/global corpus more heavily. The result: the local authority doesn’t register.
Brand Trust Erosion
Users perceive this as neglect:
“They don’t serve our market.” “Their information isn’t relevant here.”
In regulated or B2B industries where compliance, units, and standards matter, this results in lost revenue and reputational risk.
Hreflang In The Age of AI
Hreflang was a precision instrument in a rules-based world. It told Google which page to serve in which market. But AI engines don’t “serve pages” – they generate responses.
That means:
Hreflang becomes advisory, not authoritative.
Current evidence suggests LLMs don’t actively interpret hreflang during synthesis because it doesn’t apply to the document-level relationships they use for reasoning.
If your canonical structure points to global pages, the model inherits that hierarchy, not your hreflang instructions.
In short, hreflang still helps Google indexing, but it no longer governs interpretation.
AI systems learn from patterns of connectivity, authority, and relevance. If your global content has richer interlinking, higher engagement, and more external citations, it will always dominate the synthesis layer – regardless of hreflang.
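To make the canonical point concrete, here is a sketch of head markup for a hypothetical Mexican market page that canonicalizes to itself rather than to the global .com page, while listing its siblings as hreflang alternates (example.com and the paths are placeholders):

```html
<!-- Each market variant should self-canonicalize, not point at the global page,
     so AI retrieval doesn't inherit a .com-first hierarchy.
     Domain and paths are illustrative. -->
<link rel="canonical" href="https://example.com/mx/productos/" />
<link rel="alternate" hreflang="es-MX" href="https://example.com/mx/productos/" />
<link rel="alternate" hreflang="en-US" href="https://example.com/us/products/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/products/" />
```

If the canonical on the Mexican page instead pointed at the US URL, the consolidated index would treat the English version as the primary source of truth, which is exactly the inheritance problem described above.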
Let’s look at a real-world pattern observed across markets:
Weak local content (thin copy, missing schema, outdated catalog).
Global canonical consolidates authority under .com.
AI overview or chatbot pulls the English page as source data.
The model generates a response in the user’s language, drawing facts and context from the English source and adding a few local brand names to create the appearance of localization: a synthetic local-language answer.
User clicks through to a U.S. contact form, gets blocked by shipping restrictions, and leaves frustrated.
Each of these steps seems minor, but together they create a digital sovereignty problem – global data has overwritten your local market’s representation.
Geo-Legibility: The New SEO Imperative
In the era of generative search, the challenge isn’t just to rank in each market – it’s to make your presence geo-legible to machines.
Geo-legibility builds on international SEO fundamentals but addresses a new challenge: making geographic boundaries interpretable during AI synthesis, not just during traditional retrieval and ranking. While hreflang tells Google which page to index for which market, geo-legibility ensures the content itself contains explicit, machine-readable signals that survive the transition from structured index to generative response.
That means encoding geography, compliance, and market boundaries in ways LLMs can understand during both indexing and synthesis.
Key Layers Of Geo-Legibility
Content: Include explicit market context (e.g., “Distribuimos en México bajo norma NOM-018-STPS”). Why it matters: reinforces relevance to a defined geography.
Structure: Use schema for areaServed, priceCurrency, and addressLocality. Why it matters: provides explicit geographic context that may influence retrieval systems and helps future-proof as AI systems evolve to better understand structured data.
Links & Mentions: Secure backlinks from local directories and trade associations. Why it matters: builds local authority and entity clustering.
Data Consistency: Align address, phone, and organization names across all sources. Why it matters: prevents entity merging and confusion.
Governance: Monitor AI outputs for misattribution or cross-market drift. Why it matters: detects early leakage before it becomes entrenched.
Note: While current evidence for schema’s direct impact on AI synthesis is limited, these properties strengthen traditional search signals and position content for future AI systems that may parse structured data more systematically.
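As a sketch of that structured layer, a hypothetical JSON-LD block for a distributor serving the Mexican market might look like the following. Names, URLs, and contact details are placeholders:

```html
<!-- Hypothetical Organization markup for a Mexican market entity.
     All identifiers below are placeholders, not a real company. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "GlobalChem México",
  "url": "https://example.com/mx/",
  "areaServed": { "@type": "Country", "name": "Mexico" },
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Monterrey",
    "addressCountry": "MX"
  },
  "telephone": "+52-81-0000-0000"
}
</script>
```

Note that priceCurrency belongs on Offer-level markup (e.g., an Offer’s price specification) rather than on the Organization itself, so product pages would carry that signal separately.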
Geo-legibility isn’t about speaking the right language; it’s about being understood in the right place.
Diagnostic Workflow: “Where Did My Market Go?”
1. Run Local Queries in AI Overview or Chat Search. Test your core product and category terms in the local language and record which language, domain, and market each result reflects.
2. Capture Cited URLs and Market Indicators. If you see English pages cited for non-English queries, that’s a signal your local content lacks authority or visibility.
3. Cross-Check Search Console Coverage. Confirm that your local URLs are indexed, discoverable, and mapped correctly through hreflang.
4. Inspect Canonical Hierarchies. Ensure your regional URLs aren’t canonicalized to global pages. AI systems often treat canonical as “primary truth.”
5. Test Structured Geography. For Google and Bing, be sure to add or validate schema properties like areaServed, address, and priceCurrency to help engines map jurisdictional relevance.
6. Repeat Quarterly. AI search evolves rapidly. Regular testing ensures your geo boundaries remain stable as models retrain.
Remediation Workflow: From Drift To Differentiation
Step 1: Strengthen local data signals (structured geography, certification markup). Impact: clarifies market authority.
Step 2: Build localized case studies, regulatory references, and testimonials. Impact: anchors E-E-A-T locally.
Step 3: Optimize internal linking from regional subdomains to local entities. Impact: reinforces market identity.
Step 4: Secure regional backlinks from industry bodies. Impact: adds non-linguistic trust.
Step 5: Adjust canonical logic to favor local markets. Impact: prevents AI inheritance of global defaults.
Step 6: Conduct “AI visibility audits” alongside traditional SEO reports.
Beyond Hreflang: A New Model Of Market Governance
Executives need to see this for what it is: not an SEO bug, but a strategic governance gap.
AI search collapses boundaries between brand, market, and language. Without deliberate reinforcement, your local entities become shadows inside global knowledge graphs.
That loss of differentiation affects:
Revenue: You become invisible in the markets where growth depends on discoverability.
Compliance: Users act on information intended for another jurisdiction.
Equity: Your local authority and link capital are absorbed by the global brand, distorting measurement and accountability.
Why Executives Must Pay Attention
The implications of AI-driven geo drift extend far beyond marketing. When your brand’s digital footprint no longer aligns with its operational reality, it creates measurable business risk. A misrouted customer in the wrong market isn’t just a lost lead; it’s a symptom of organizational misalignment between marketing, IT, compliance, and regional leadership.
Executives must ensure their digital infrastructure reflects how the company actually operates, which markets it serves, which standards it adheres to, and which entities own accountability for performance. Aligning these systems is not optional; it’s the only way to minimize negative impact as AI platforms redefine how brands are recognized, attributed, and trusted globally.
Executive Imperatives
Reevaluate Canonical Strategy. What once improved efficiency may now reduce market visibility. Treat canonicals as control levers, not conveniences.
Expand SEO Governance to AI Search Governance. Traditional hreflang audits must evolve into cross-market AI visibility reviews that track how generative engines interpret your entity graph.
Reinvest in Local Authority. Encourage regional teams to create content with market-first intent – not translated copies of global pages.
Measure Visibility Differently. Rankings alone no longer indicate presence: track citations, sources, and language of origin in AI search outputs.
Final Thought
AI didn’t make geography irrelevant; it just exposed how fragile our digital maps were.
Hreflang, ccTLDs, and translation workflows gave companies the illusion of control.
AI search removed the guardrails, and now the strongest signals win – regardless of borders.
The next evolution of international SEO isn’t about tagging and translating more pages. It’s about governing your digital borders and making sure every market you serve remains visible, distinct, and correctly represented in the age of synthesis.
Because when AI redraws the map, the brands that stay findable aren’t the ones that translate best; they’re the ones who define where they belong.
See Where Your Brand Stands in the New Search Frontier
AI search has become the new gateway to visibility. As Google’s AI Overviews and Answer Engine Optimization (AEO) reshape discovery, the question is no longer if your brand should adapt, but how fast.
Join Pat Reinhart, VP of Services and Thought Leadership at Conductor, and Shannon Vize, Sr. Content Marketing Manager at Conductor, for an exclusive first look at the 2026 AEO and GEO Benchmarks Report, the industry’s most comprehensive study of AI search performance across 10 key industries.
What You’ll Learn
The exclusive 2026 benchmarks for AI referral traffic, AIO visibility, and AEO/GEO performance across industries
How to identify where your brand stands against AI market share leaders
How AI search and AIO are transforming visibility and referral traffic
Why Attend?
This is your opportunity to see what top-performing brands are doing differently and how to measure your own visibility, referral traffic, and share of voice in AI search. You’ll gain data-backed insights to update your SEO and AEO strategy for 2026 and beyond.
📌 Register now to secure your seat and benchmark your brand’s performance in the new era of AI search.
🛑 Can’t make it live? Register anyway and we’ll send you the full recording after the event.
For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.
In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.
Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate
In 2017, fresh off a PhD on theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from game-playing AI to a secret project to predict the structures of proteins. He applied for a job.
Just three years later, Jumper and CEO Demis Hassabis had led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching lab-level accuracy, and doing it many times faster—returning results in hours instead of months.
Last year, Jumper and Hassabis shared a Nobel Prize in chemistry. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out. Read the full story.
—Will Douglas Heaven
The State of AI: Chatbot companions and the future of our privacy
—Eileen Guo & Melissa Heikkilä
Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.
Some state governments are taking notice and starting to regulate companion AI. But tellingly, one area the laws fail to address is user privacy. Read the full story.
This is the fourth edition of The State of AI, our subscriber-only collaboration between the Financial Times and MIT Technology Review. Sign up here to receive future editions every Monday.
While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the MIT Technology Review are able to read the whole thing on our site.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump has signed an executive order to boost AI innovation
The “Genesis Mission” will try to speed up the rate of scientific breakthroughs. (Politico)
+ The order directs government science agencies to aggressively embrace AI. (Axios)
+ It’s also being touted as a way to lower energy prices. (CNN)
2 Anthropic’s new AI model is designed to be better at coding
We’ll discover just how much better once Claude Opus 4.5 has been properly put through its paces. (Bloomberg $)
+ It reportedly outscored human candidates in an internal engineering test. (VentureBeat)
+ What is vibe coding, exactly? (MIT Technology Review)
3 The AI boom is keeping India hooked on coal
Leaving little chance of cleaning up Mumbai’s famously deadly pollution. (The Guardian)
+ It’s lethal smog season in New Delhi right now. (CNN)
+ The data center boom in the desert. (MIT Technology Review)
4 Teenagers are losing access to their AI companions
Character.AI is limiting the amount of time underage users can spend interacting with its chatbots. (WSJ $)
+ The majority of the company’s users are young and female. (CNBC)
+ One of OpenAI’s key safety leaders is leaving the company. (Wired $)
+ The looming crackdown on AI companionship. (MIT Technology Review)
5 Weight-loss drugs may be riskier during pregnancy
Recipients are more likely to deliver babies prematurely. (WP $)
+ The pill version of Ozempic failed to halt Alzheimer’s progression in a trial. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)
6 OpenAI is launching a new “shopping research” tool
All the better to track your consumer spending with. (CNBC)
+ It’s designed for price comparisons and compiling buyer’s guides. (The Information $)
+ The company is clearly aiming for a share of Amazon’s e-commerce pie. (Semafor)
7 LA residents displaced by wildfires are moving into prefab housing
Their new homes are cheap to build and simple to install. (Fast Company $)
+ How AI can help spot wildfires. (MIT Technology Review)
8 Why former Uber drivers are undertaking the world’s toughest driving test
They’re taking the Knowledge—London’s gruelling street test that bypasses GPS. (NYT $)
9 How to spot a fake battery
Great, one more thing to worry about. (IEEE Spectrum)
10 Where is the Trump Mobile?
Almost six months after it was announced, there’s no sign of it. (CNBC)
Quote of the day
“AI is a tsunami that is gonna wipe out everyone. So I’m handing out surfboards.”
—Filmmaker PJ Accetturo, telling Ars Technica why he’s writing a newsletter advising fellow creatives how to pivot to AI tools.
One more thing
The second wave of AI coding is here
Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding.
Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. This next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it.
But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story.
—Will Douglas Heaven
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If you’re planning a visit to Istanbul here’s hoping you like cats—the city can’t get enough of them.
+ Rest in power reggae icon Jimmy Cliff.
+ Did you know the ancient Egyptians had a pretty accurate way of testing for pregnancy?
+ As our readers in the US start prepping for Thanksgiving, spare a thought for Astoria the lovelorn turkey.
Crunchbase reported in November that ecommerce start-up funding was about to hit a five-year low as investors focus on AI, logistics, marketplace, and live and social shopping.
The ecommerce industry is maturing. Per Crunchbase, total 2025 ecommerce start-up funding in the United States will reach $2.73 billion. That’s down from $3.06 billion last year and $28.05 billion in the pandemic-fueled 2021.
The global market is similar. Worldwide ecommerce investments peaked in 2021 at $92.46 billion, dropping to $7.72 billion this year.
Ecommerce Funding 2020 to 2025
2020: United States $10 billion; Global $31.19 billion
2021: United States $28.05 billion; Global $92.46 billion
2022: United States $9.98 billion; Global $36.06 billion
2023: United States $2.87 billion; Global $16.54 billion
2024: United States $3.06 billion; Global $10.61 billion
2025: United States $2.73 billion; Global $7.27 billion
Seed through growth rounds of $200,000 or more. Source: Crunchbase.
Investment in Focus
Still, the ecommerce industry continues to grow. Several sources, including the National Retail Federation, estimate that U.S. ecommerce sales will grow 7% to 9% year over year in 2025, roughly double the growth rate of physical retail.
So why are start-up ecommerce investments down if the industry is growing? The answer is that investors are not abandoning ecommerce. Rather, they are concentrating on areas they believe will define the next phase.
Certainly that’s true for retail enterprises and platforms. Amazon, Walmart, Shopify, PayPal, Target, and prominent brands have announced AI partnerships. These deals are indicative of where enterprise retail is going.
In reviewing this year’s start-up investments and deals among large retailers and commerce platforms, I see five areas of interest that indicate what’s next for ecommerce.
AI shopping,
AI commerce infrastructure,
Rapid logistics and fulfillment,
Marketplaces,
Live and social commerce.
AI Shopping
AI product search, AI-assisted shopping, and AI-powered agentic commerce are the hottest topics in the industry. Seemingly every major ecommerce retailer, marketplace, and platform is rushing, to varying degrees, toward AI-guided ecommerce.
AI shopping tools aim to reduce friction, match products to intent, and increase conversions. The tools will offer shoppers fewer but hopefully more relevant options.
For small-to-medium ecommerce businesses, AI integration will depend on platform adoption. As Shopify, WooCommerce, and BigCommerce integrate AI-guided shopping tools, even small merchants can offer conversational search, personalized recommendations, and similar features.
AI shopping assistants may become as standard as site search, shifting how shoppers interact with independent ecommerce stores individually and collectively.
AI Commerce Infrastructure
Another cluster of investments focuses on the underlying infrastructure that powers ecommerce. These include product feeds, merchandising, ad creation, and operations.
GrowthList reported that ShopVision Technologies and Beyond the Checkout each raised funds to automate analytics, product catalog management, and post-purchase workflows.
The industry should therefore expect new software tools and platforms that do the work, such as creating an ad campaign or analyzing sales trends.
Rapid Logistics and Fulfillment
Logistics continues to attract investments as well.
India-based Zepto raised $450 million to expand its fast-delivery grocery network. Wonder, an American food and household delivery service, secured roughly $600 million earlier in 2025, according to Crunchbase.
Other logistics investments included Coco, a last-mile delivery provider that raised $60 million, and Stord, a distributed fulfillment network that raised $80 million.
Getting an ecommerce order from the warehouse to the customer has always been a significant challenge. Amazon and others have mastered it with same-day delivery, yet more speed and efficiency are needed.
Marketplaces
Investors are funding ecommerce marketplaces and related automation tools.
Refurbed, the European recommerce marketplace, raised more than $60 million to scale its operations, while emerging marketplaces in the Middle East, Asia, and Latin America continue to attract capital.
Meanwhile, Amazon, Walmart, Target Plus, TikTok Shop, and Temu have all expanded API access and seller tools, signaling increased competition.
Hence ecommerce sales and distribution might continue to become more decentralized. Independent ecommerce merchants may need to participate in several marketplaces.
Live and Social Commerce
Finally, livestream commerce continues to attract investors. Whatnot, the live shopping marketplace, raised $225 million, according to Crunchbase, and reported more than $6 billion in sales in 2025.
The trend may be a move away from static product pages to interactive, personality-driven sales channels. Live commerce enables shoppers to ask questions and see products in use.
This human interaction could facilitate trust more quickly. It might also be a counterweight of sorts to agentic commerce, where all of the trust is with the AI.
Most retailers will likely host livestreams via platforms and integrations — larger sellers more frequently than smaller ones.
Doc Brown’s DeLorean didn’t just travel through time; it created different timelines. Same car, different realities. In “Back to the Future,” when Marty’s actions in the past threatened his existence, his photograph began to flicker between realities depending on choices made across timelines.
This exact phenomenon is happening to your brand right now in AI systems.
ChatGPT on Monday isn’t the same as ChatGPT on Wednesday. Each conversation creates a new timeline with different context, different memory states, different probability distributions. Your brand’s presence in AI answers can fade or strengthen like Marty’s photograph, depending on context ripples you can’t see or control. This fragmentation happens thousands of times daily as users interact with AI assistants that reset, forget, or remember selectively.
The challenge: How do you maintain brand consistency when the channel itself has temporal discontinuities?
The Three Sources Of Inconsistency
The variance isn’t random. It stems from three technical factors:
Probabilistic Generation
Large language models don’t retrieve information; they predict it token by token using probability distributions. Think of it like autocomplete on your phone, but vastly more sophisticated. AI systems use a “temperature” setting that controls how adventurous they are when picking the next word. At temperature 0, the AI always picks the most probable choice, producing consistent but sometimes rigid answers. At higher temperatures (most consumer AI uses 0.7 to 1.0 as defaults), the AI samples from a broader range of possibilities, introducing natural variation in responses.
The same question asked twice can yield measurably different answers. Research shows that even with supposedly deterministic settings, LLMs display output variance across identical inputs, and studies reveal distinct effects of temperature on model performance, with outputs becoming increasingly varied at moderate-to-high settings. This isn’t a bug; it’s fundamental to how these systems work.
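To make “temperature” concrete, here is a toy Python sketch of temperature sampling over made-up next-token logits. It illustrates the mechanism only; it is not any vendor’s actual decoding code, and the logits and seeds are invented.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Pick a token index from logits softened by temperature.
    Near-zero temperature approaches greedy argmax; higher values
    flatten the distribution, so repeated runs diverge."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]
greedy = [sample_with_temperature(logits, 0.01, random.Random(s)) for s in range(5)]
warm = [sample_with_temperature(logits, 1.0, random.Random(s)) for s in range(5)]
# Near zero, the same token wins every run; at 1.0, the picks vary by seed.
```

Run it and the low-temperature list repeats one index while the temperature-1.0 list mixes indices: the same input, different outputs, purely from sampling.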
Context Dependence
Traditional search isn’t conversational. You perform sequential queries, but each one is evaluated independently. Even with personalization, you’re not having a dialogue with an algorithm.
AI conversations are fundamentally different. The entire conversation thread becomes direct input to each response. Ask about “family hotels in Italy” after discussing “budget travel” versus “luxury experiences,” and the AI generates completely different answers because previous messages literally shape what gets generated. But this creates a compounding problem: the deeper the conversation, the more context accumulates, and the more prone responses become to drift. Research on the “lost in the middle” problem shows LLMs struggle to reliably use information from long contexts, meaning key details from earlier in a conversation may be overlooked or mis-weighted as the thread grows.
For brands, this means your visibility can degrade not just across separate conversations, but within a single long research session as user context accumulates and the AI’s ability to maintain consistent citation patterns weakens.
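A minimal sketch of why context shapes output: chat models receive the entire thread as one input, so an identical question arrives as a different token sequence depending on prior turns. The role labels and flat string format here are illustrative; real chat APIs use structured message lists.

```python
def build_prompt(history, question):
    """Flatten a conversation thread plus the new question into one
    model input. Different prior turns produce different inputs,
    which in turn produce different probability distributions."""
    turns = [f"{role}: {text}" for role, text in history]
    turns.append(f"user: {question}")
    return "\n".join(turns)

question = "What are good family hotels in Italy?"
after_budget = build_prompt([("user", "Suggest budget travel tips")], question)
after_luxury = build_prompt([("user", "Suggest luxury travel experiences")], question)
# Same question, two different model inputs.
```

The two prompts differ even though the final question is identical, which is the whole context-dependence problem in miniature.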
Temporal Discontinuity
Each new conversation instance starts from a different baseline. Memory systems help, but remain imperfect. AI memory works through two mechanisms: explicit saved memories (facts the AI stores) and chat history reference (searching past conversations). Neither provides complete continuity. Even when both are enabled, chat history reference retrieves what seems relevant, not everything that is relevant. And if you’ve ever tried to rely on any system’s memory based on uploaded documents, you know how flaky this can be – whether you give the platform a grounding document or tell it explicitly to remember something, it often overlooks the fact when needed most.
Result: Your brand visibility resets partially or completely with each new conversation timeline.
The Context Carrier Problem
Meet Sarah. She’s planning her family’s summer vacation using ChatGPT Plus with memory enabled.
Monday morning, she asks, “What are the best family destinations in Europe?” ChatGPT recommends Italy, France, Greece, Spain. By evening, she’s deep into Italy specifics. ChatGPT remembers the comparison context, emphasizing Italy’s advantages over the alternatives.
Wednesday: Fresh conversation, and she asks, “Tell me about Italy for families.” ChatGPT’s saved memories include “has children” and “interested in European travel.” Chat history reference might retrieve fragments from Monday: country comparisons, limited vacation days. But this retrieval is selective. Wednesday’s response is informed by Monday but isn’t a continuation. It’s a new timeline with lossy memory – like a JPEG copy of a photograph, details are lost in the compression.
Friday: She switches to Perplexity. “Which is better for families, Italy or Spain?” Zero memory of her previous research. From Perplexity’s perspective, this is her first question about European travel.
Sarah is the “context carrier,” but she’s carrying context across platforms and instances that can’t fully sync. Even within ChatGPT, she’s navigating multiple conversation timelines: Monday’s thread with full context, Wednesday’s with partial memory, and of course Friday’s Perplexity query with no context for ChatGPT at all.
For your hotel brand: You appeared in Monday’s ChatGPT answer with full context. Wednesday’s ChatGPT has lossy memory; maybe you’re mentioned, maybe not. Friday on Perplexity, you never existed. Your brand flickered across three separate realities, each with different context depths, different probability distributions.
Your brand presence is probabilistic across infinite conversation timelines, each one a separate reality where you can strengthen, fade, or disappear entirely.
Why Traditional SEO Thinking Fails
The old model was somewhat predictable. Google’s algorithm was stable enough to optimize once and largely maintain rankings. You could A/B test changes, build toward predictable positions, defend them over time.
That model breaks completely in AI systems:
No Persistent Ranking
Your visibility resets with each conversation. Unlike Google, where position 3 carries across millions of users, in AI, each conversation is a new probability calculation. You’re fighting for consistent citation across discontinuous timelines.
Context Advantage
Visibility depends on what questions came before. Your competitor mentioned in the previous question has context advantage in the current one. The AI might frame comparisons favoring established context, even if your offering is objectively superior.
Probabilistic Outcomes
Traditional SEO aimed for “position 1 for keyword X.” AI optimization aims for “high probability of citation across infinite conversation paths.” You’re not targeting a ranking; you’re targeting a probability distribution.
The business impact becomes very real. Sales training becomes outdated when AI gives different product information depending on question order. Customer service knowledge bases must work across disconnected conversations where agents can’t reference previous context. Partnership co-marketing collapses when AI cites one partner consistently but the other sporadically. Brand guidelines optimized for static channels often fail when messaging appears verbatim in one conversation and never surfaces in another.
The measurement challenge is equally profound. You can’t just ask, “Did we get cited?” You must ask, “How consistently do we get cited across different conversation timelines?” This is why consistent, ongoing testing is critical. Even if you have to manually ask queries and record answers.
The Three Pillars Of Cross-Temporal Consistency
1. Authoritative Grounding: Content That Anchors Across Timelines
Authoritative grounding acts like Marty’s photograph. It’s an anchor point that exists across timelines. The photograph didn’t create his existence, but it proved it. Similarly, authoritative content doesn’t guarantee AI citation, but it grounds your brand’s existence across conversation instances.
This means content that AI systems can reliably retrieve regardless of context timing. Structured data that machines can parse unambiguously: Schema.org markup for products, services, locations. First-party authoritative sources that exist independent of third-party interpretation. Semantic clarity that survives context shifts: Write descriptions that work whether the user asked about you first or fifth, whether they mentioned competitors or ignored them. Semantic density helps: keep the facts, cut the fluff.
A hotel with detailed, structured accessibility features gets cited consistently, whether the user asked about accessibility at conversation start or after exploring ten other properties. The content’s authority transcends context timing.
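As an example of structured data that survives context shifts, accessibility details like those above could be published as Schema.org JSON-LD. The property names come from Schema.org’s Hotel type and LocationFeatureSpecification; the hotel itself and its features are hypothetical. A short Python sketch that emits the markup:

```python
import json

# Hypothetical hotel. Schema.org's Hotel type accepts amenityFeature
# entries of type LocationFeatureSpecification for facts like this.
hotel = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Harbour Hotel",
    "amenityFeature": [
        {"@type": "LocationFeatureSpecification",
         "name": "Wheelchair-accessible rooms", "value": True},
        {"@type": "LocationFeatureSpecification",
         "name": "Step-free entrance", "value": True},
    ],
}
# The resulting JSON-LD goes inside a <script type="application/ld+json"> tag.
jsonld = json.dumps(hotel, indent=2)
```

Because each fact is a discrete, machine-readable statement, it can be retrieved whether the accessibility question comes first in a conversation or tenth.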
2. Multi-Instance Optimization: Content For Query Sequences
Stop optimizing for just single queries. Start optimizing for query sequences: chains of questions across multiple conversation instances.
You’re not targeting keywords; you’re targeting context resilience. Content that works whether it’s the first answer or the fifteenth, whether competitors were mentioned or ignored, whether the user is starting fresh or deep in research.
Test systematically: Cold start queries (generic questions, no prior context). Competitor context established (user discussed competitors, then asks about your category). Temporal gap queries (days later in fresh conversation with lossy memory). The goal is minimizing your “fade rate” across temporal instances.
If you’re cited 70% of the time in cold starts but only 25% after competitor context is established, you have a context resilience problem, not a content quality problem.
3. Consistency Measurement: Metrics Across Conversation Variations
Stop measuring just citation frequency. Start measuring citation consistency: how reliably you appear across conversation variations.
Traditional analytics told you how many people found you. AI analytics must tell you how reliably people find you across infinite possible conversation paths. It’s the difference between measuring traffic and measuring probability fields.
Key metrics: Search Visibility Ratio (percentage of test queries where you’re cited). Context Stability Score (variance in citation rate across different question sequences). Temporal Consistency Rate (citation rate when the same query is asked days apart). Repeat Citation Count (how often you appear in follow-up questions once established).
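A sketch of how those metrics could be computed from a hand-recorded test log. The record schema and scenario names are assumptions for illustration; only the metric names come from the list above.

```python
from statistics import mean, pstdev

def visibility_metrics(results):
    """Summarize citation consistency from manual AI-answer tests.
    Each record is a dict: {"scenario": str, "cited": bool}."""
    by_scenario = {}
    for r in results:
        by_scenario.setdefault(r["scenario"], []).append(1 if r["cited"] else 0)
    per_scenario = {s: mean(v) for s, v in by_scenario.items()}
    return {
        # Share of all test queries where the brand was cited.
        "search_visibility_ratio": mean(1 if r["cited"] else 0 for r in results),
        # Citation rate per context type (cold start, competitor context, ...).
        "per_scenario_rate": per_scenario,
        # Spread of citation rates across scenarios; lower means more stable.
        "context_stability_score": pstdev(per_scenario.values()),
    }

log = [
    {"scenario": "cold_start", "cited": True},
    {"scenario": "cold_start", "cited": True},
    {"scenario": "competitor_context", "cited": True},
    {"scenario": "competitor_context", "cited": False},
    {"scenario": "temporal_gap", "cited": False},
]
metrics = visibility_metrics(log)
```

Even this small log makes the point: an overall citation rate of 60% hides a brand that always appears cold but vanishes after a temporal gap.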
Test the same core question across different conversation contexts. Measure citation variance. Accept the variance as fundamental and optimize for consistency within that variance.
What This Means For Your Business
For CMOs: Brand consistency is now probabilistic, not absolute. You can only work to increase the probability of consistent appearance across conversation timelines. This requires ongoing optimization budgets, not one-time fixes. Your KPIs need to evolve from “share of voice” to “consistency of citation.”
For content teams: The mandate shifts from comprehensive content to context-resilient content. Documentation must stand alone AND connect to broader context. You’re not building keyword coverage, you’re building semantic depth that survives context permutation.
For product teams: Documentation must work across conversation timelines where users can’t reference previous discussions. Rich structured data becomes critical. Every product description must function independently while connecting to your broader brand narrative.
Navigating The Timelines
The brands that succeed in AI systems won’t be those with the “best” content in traditional terms. They’ll be those whose content achieves high-probability citation across infinite conversation instances. Content that works whether the user starts with your brand or discovers you after competitor context is established. Content that survives memory gaps and temporal discontinuities.
The question isn’t whether your brand appears in AI answers. It’s whether it appears consistently across the timelines that matter: the Monday morning conversation and the Wednesday evening one. The user who mentions competitors first and the one who doesn’t. The research journey that starts with price and the one that starts with quality.
In “Back to the Future,” Marty had to ensure his parents fell in love to prevent himself from fading from existence. In AI search, businesses must ensure their content maintains authoritative presence across context variations to prevent their brands from fading from answers.
The photograph is starting to flicker. Your brand visibility is resetting across thousands of conversation timelines daily, hourly. The technical factors causing this (probabilistic generation, context dependence, temporal discontinuity) are fundamental to how AI systems work.
The question is whether you can see that flicker happening and whether you’re prepared to optimize for consistency across discontinuous realities.
There’s a dividing line in the industry between those who think optimizing for AI is separate from SEO and those who think LLM discovery is just SEO. But this is an unproductive argument: whatever you think, LLM inclusion is now part of SEO discovery.
So, let’s just focus on how the search journey works now and where you can find real business value.
To discuss inclusion in LLMs, I invited Patrick Stox to the latest edition of IMHO to find out what he thinks. As product advisor, technical SEO, and brand ambassador at Ahrefs, Patrick has plenty of data to work with and insights into what’s actually working for LLM inclusion right now.
In the face of the AI takeover, Patrick’s take is that Google isn’t going anywhere, and he still thinks human relationships are critical.
With the industry obsessing over ChatGPT, AI Overviews, and AI Mode, it’s easy to assume that traditional search really is dead. However, Patrick was quick to say, “I’m not betting against Google.”
“Google is still everything for most people … Most of the people that are using [LLMs] are tech forward, but the majority of folks are still just Googling things.”
Recent Ahrefs data estimates that Google drives about 40% of all traffic to websites, with LLM referrals still a fraction of that by comparison. Although Google’s share of traffic may be down a couple of percentage points this year, it still dominates.
After experimenting with ChatGPT and Claude when they first launched, Patrick found himself returning to Google’s AI Mode and Gemini, and thinks others will do the same. “Even I just went back to Google,” he admitted. “I think we’re going to see more of that as they improve their systems.”
Google continues releasing competitive AI innovations, and Patrick predicts these will pull many users back into Google’s ecosystem.
“I’m not betting against Google,” he says. “They’ve got more data than anyone, and they’re still on the bleeding edge.”
The Attribution Problem: LLMs Might Drive Conversions, But We Can’t Prove It
Even though sites are seeing growing referrals from LLMs, attributing real business value to that traffic remains a challenge. We can talk about brand awareness, but the C-suite is only interested in business value.
Patrick agreed that while you can count mentions and citations in AI answers, that doesn’t easily translate into board-level reporting.
“You can measure how often you’re mentioned versus competitors … but going back to a business, I can’t report on that stuff. It’s all secondary, tertiary metrics.”
For Patrick, revenue and revenue-adjacent metrics still matter. That said, Ahrefs has had some signals from AI search traffic.
“We did track the signups. When I first looked at this data back in July, all the traffic from AI search was half a percent of our traffic total. But at the time, it was 12.1% of our total conversions,” he explained.
This has now dropped below 10%, while the traffic share has grown slightly.
Two Strategies That Are Working For LLM Inclusion
I asked if Ahrefs is actively investing in LLM inclusion. Patrick said they are trying a number of different things, but the two fundamental approaches that determine LLM visibility are repetition and differentiation.
“Whatever the internet says, that’s kind of what’s being returned in these systems.”
Repetition means ensuring consistent messaging across multiple websites. LLMs synthesize what “the internet says,” so if you want to be recognized for something, that narrative needs to exist broadly. For Ahrefs, this has meant actively spreading the message that they have evolved beyond just SEO tools into a comprehensive digital marketing platform.
Differentiation through original data works alongside the repetition to stand out. Ahrefs has invested heavily in unique data studies throughout the year, including non-English language research. “This data is being heavily cited, heavily returned in these systems because there’s nothing else out there like it,” Patrick explained.
The more surprising tactic that is also currently working is listicles.
“I hate to say it, but listicles … they work right now. I don’t think it’s future-proof at all, but at the same time, I don’t want to just not be there.”
Agentic AI And The Threat Of Closed Systems
I then asked about agentic AI and whether Patrick has concerns about these systems becoming closed.
As LLM agents begin booking travel, making purchases, or accessing APIs directly, they will most likely rely on a small set of partners among big brands.
“ChatGPT isn’t going to make deals with unknown companies,” Stox says. “If they book flights, they’ll use major providers. If they use a dictionary, they’ll pick one dictionary.”
This would be the real threat to smaller businesses. “If an agent decides ‘we only check out through Amazon,’ a lot of stores lose sales overnight,” Patrick warns. There is no guaranteed defense. The only strategy we can follow right now is to grow your brand and footprint.
“What was the thing they used to say for Google? Make them embarrassed to not have you included.”
Beyond LLM Optimization: Channels That Still Matter
Patrick emphasized a point that’s possibly been forgotten in the AI hype: “It’s not ChatGPT that’s the second largest search engine, it’s still YouTube by far.”
YouTube has been a hugely successful referral platform for Ahrefs, and the company has invested heavily in video. Patrick recommends both long- and short-form video for brand discovery.
Community participation on platforms such as Reddit, Slack, and Discord also offers substantial value, but only when companies genuinely participate rather than spam.
While many brands have tried to brute-force Reddit with spam, Patrick says there can be huge value in genuine participation, especially when employees are allowed to represent the company authentically.
“You have literally a paid workforce of advocates who work for your company. Let them go out and talk to people … answer questions, basically advertise for you. They want to do it already. So let them.”
If You Started A Product Today, Where Would You Bet?
As a final question, I asked Patrick where he’d invest if launching a startup today; he did not hesitate to say relationships.
“If I launched a startup, the first thing I’d invest in is relationships. That’s still the most powerful channel … I think if I did do something like that, I’d probably grow it pretty fast. More from my connections than anything else,” he said.
After relationships, he’d focus on YouTube, website content creation, and telling friends about the product. In other words, “just normal marketing.”
“We’ve gone through this tech revolution, and now we’re realizing everything still comes back to direct connections with people.”
And that may be the most important insight of all. In an era of AI-driven discovery, the brands that win are the ones that remain unmistakably human.
Watch the full video interview with Patrick Stox here:
Thank you to Patrick Stox for offering his insights and being my guest on IMHO.
Featured Image: Shelley Walsh/Search Engine Journal
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Meet the man building a starter kit for civilization
You live in a house you designed and built yourself. You rely on the sun for power, heat your home with a woodstove, and farm your own fish and vegetables. The year is 2025.
This is the life of Marcin Jakubowski, the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS).
It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit. It’s all part of his ethos that life-changing technology should be available to all, not controlled by a select few. Read the full story.
—Tiffany Ng
This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.
What it’s like to find yourself in the middle of a conspiracy theory
Last week, we held a subscribers-only Roundtables discussion exploring how to cope in this new age of conspiracy theories. Our features editor Amanda Silverman and executive editor Niall Firth were joined by conspiracy expert Mike Rothschild, who explained exactly what it’s like to find yourself at the center of a conspiracy you can’t control. Watch the conversation back here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 DOGE has been disbanded
Even though it’s got eight months left before its officially scheduled end. (Reuters)
+ It leaves a legacy of chaos and few measurable savings. (Politico)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

2 How OpenAI’s tweaks to ChatGPT sent some users into delusional spirals
It essentially turned a dial that increased both usage of the chatbot and the risks it poses to a subset of people. (NYT $)
+ AI workers are warning loved ones to stay away from the technology. (The Guardian)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

3 A three-year-old has received the world’s first gene therapy for Hunter syndrome
Oliver Chu appears to be developing normally one year after starting therapy. (BBC)

4 Why we may—or may not—be in an AI bubble
It’s time to follow the data. (WP $)
+ Even tech leaders don’t appear to be entirely sure. (Insider $)
+ How far can the ‘fake it til you make it’ strategy take us? (WSJ $)
+ Nvidia is still riding the wave with abandon. (NY Mag $)

5 Many MAGA influencers are based in Russia, India, and Nigeria
X’s new account provenance feature is revealing some interesting truths. (The Daily Beast)

6 The FBI wants to equip drones with facial recognition tech
Civil libertarians claim the plans equate to airborne surveillance. (The Intercept)
+ This giant microwave may change the future of war. (MIT Technology Review)

7 Snapchat is alerting users ahead of Australia’s under-16s social media ban
The platform will analyze an account’s “behavioral signals” to estimate a user’s age. (The Guardian)
+ An AI nudification site has been fined for skipping age checks. (The Register)
+ Millennial parents are fetishizing the notion of an offline childhood. (The Observer)

8 Activists are roleplaying ICE raids in Fortnite and Grand Theft Auto
It’s in a bid to prepare players to exercise their rights in the real world. (Wired $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

9 The JWST may have uncovered colossal stars
They’re so large their masses may be up to 10,000 times that of the sun. (New Scientist $)
+ Inside the hunt for the most dangerous asteroid ever. (MIT Technology Review)

10 Social media users are lying about brands ghosting them
Completely normal behavior. (WSJ $)
+ This would never have happened on Vine, I’ll tell you now. (The Verge)
Quote of the day
“I can’t believe we have to say this, but this account has only ever been run and operated from the United States.”
—The US Department of Homeland Security’s X account attempts to end speculation surrounding its social media origins, the New York Times reports.
One more thing
This company is planning a lithium empire from the shores of the Great Salt Lake
On a bright afternoon in August, the shore of Utah’s Great Salt Lake looks like something out of a science fiction film set in a scorching alien world.
This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will shake up the United States’ efforts to pry control over the global supply of lithium, the so-called “white gold” needed for electric vehicles and batteries, away from China.
The startup is in a race to commercialize a new, less environmentally-damaging way to extract lithium from rocks. If everything pans out, it could significantly increase domestic supply at a crucial moment for the nation’s lithium extraction industry. Read the full story.
—Alexander C. Kaufman
We can still have nice things
+ I love the thought of clever crows putting their smarts to use picking up cigarette butts (thanks Alice!)
+ Talking of brains, sea urchins have a whole lot more than we originally suspected.
+ Wow—a Ukrainian refugee has won an elite-level sumo competition in Japan.
+ How to make any day feel a little bit brighter.