A new Google-Ipsos report shows AI adoption is increasing globally, especially in emerging markets.
However, the study reveals challenges like regional divides, gender disparities, and slower adoption in developed countries.
Critics, including Nate Hake, founder of Travel Lemming, point out how Google overlooks these challenges in its report coverage.
Takeaways of Google’s AI Ipsos survey when you look through the PR spin 👓👇
1) 71% of Americans did not even use Generative AI in 2024
2) 58% of Americans think AI is unlikely to benefit them
3) There is a concerning gender gap in AI usage
4) US society is deeply apprehensive… https://t.co/dSZEtXsDoG
While optimism around AI is rising, it’s not resonating with everyone.
Here’s a closer look at the report and what the numbers indicate.
AI Is Growing, But Unevenly
Globally, 48% of people used generative AI last year, with countries like Nigeria, Mexico, and South Africa leading adoption. These regions also show the most excitement about AI’s potential to boost economies and improve lives.
Adoption lags at 29% in developed nations like the U.S. and Canada, meaning that 71% of people in these regions haven’t knowingly engaged with generative AI tools.
Screenshot: Google-Ipsos Study ‘Our life with AI: From innovation to application,’ January 2025.
Optimism Outweighs Concerns
Globally, 57% of people are excited about AI, compared to 43% who are concerned—a shift from the year prior, when excitement and concerns were evenly split.
People cite AI’s potential in science (72%) and medicine (71%) as reasons for their optimism. Respondents see opportunities for breakthroughs in healthcare and research.
However, in the U.S., skepticism lingers—only 52% believe AI will directly benefit “people like them,” compared to the global average of 59%.
Gender Gaps Persist
The report highlights a gender gap in AI usage: 55% of global AI users are men compared to 45% women.
The disparity is even wider in workplace adoption, where only 41% of professional AI users are women.
Emerging Markets Are Leading the Way
Emerging markets are using AI more and are more optimistic about its potential.
In regions like Nigeria and South Africa, people are more likely to believe AI will transform their economies.
Meanwhile, developed countries like the U.S. and U.K. remain cautious.
Only 53% of Americans prioritize AI innovation, compared to much higher enthusiasm in emerging markets.
Non-Generative AI
While generative AI tools like chatbots and content generators grab headlines, the public is more appreciative of non-generative AI applications.
These include AI for healthcare, fraud detection, flood forecasting, and other practical, high-impact use cases.
Generative AI, on the other hand, gets mixed reviews.
Writing, summarizing, or customer service applications don’t resonate as strongly with the public as AI’s potential to tackle bigger societal issues.
AI at Work: Young, Affluent, and Male-Dominated
AI is making its way into the workplace. 74% of AI users use it professionally for writing, brainstorming, and problem-solving tasks.
However, workplace AI adoption is skewed toward younger, wealthier, and male workers.
Blue-collar workers and older professionals are catching up—67% of blue-collar AI users and 68% of workers aged 50-74 use AI at work—but the gender gap remains pronounced.
Trust in AI Is Growing
Trust in AI governance is improving, with 61% of people confident their governments can regulate AI responsibly (up from 57% in 2023).
72% support collaboration between governments and companies to manage AI’s risks and maximize its benefits.
Takeaway
AI use is growing worldwide, though many people in North America still see little reason to use it.
To increase AI’s adoption, companies must build trust and clearly communicate the technology’s benefits.
A sharp-eyed Australian SEO spotted indirect confirmation, hiding in plain sight for years, that Google uses AI detection as part of search rankings. Although Google is fairly transparent about content policies, new data from a Googler’s LinkedIn profile adds a little more detail.
“Important FYI Googler Chris Nelson from Search Quality team his LinkedIn says He manages global team that build ranking solutions as part of Google Search ‘detection and treatment of AI generated content’.”
Googler And AI Content Policy
The Googler, Chris Nelson, works in Google’s Search Ranking department and is listed as a co-author of Google’s guidance on AI-generated content, which makes his background worth a closer look.
The relevant work experience at Google is listed as:
“I manage a large, global team that builds ranking solutions as part of Google Search and direct the following areas:
“-Prevent manipulation of ranking signals (e.g., anti-abuse, spam, harm)
-Provide qualitative and quantitative understanding of quality issues (e.g., user interactions, insights)
-Address novel content issues (e.g., detection and treatment of AI-generated content)
-Reward satisfying, helpful content”
There are no search-ranking-related research papers or patents listed under his name, but that’s probably because his educational background is in business administration and economics.
What may be of special interest to publishers and digital marketers are the following two sections:
1. He lists addressing “detection and treatment of AI-generated content”
2. He provides “qualitative and quantitative understanding of quality issues (e.g., user interactions, insights)”
While the user interactions and insights part might seem unrelated to the detection and treatment of AI-generated content, that work serves the broader goal of understanding search quality issues, which is related.
His role is defined as evaluation and analysis of quality issues in Google’s Search Ranking department. “Quantitative understanding” refers to analyzing data and “qualitative understanding” is a more subjective part of his job that may be about insights, understanding the “why” and “how” of observed data.
Co-Author Of Google’s AI-Generated Content Policy
Chris Nelson is listed as a co-author of Google’s guidance on AI-generated content. The guidance doesn’t prohibit the use of AI for published content; rather, it advises that AI shouldn’t be used to create content that violates Google’s spam policies. That may sound contradictory, because AI is virtually synonymous with scaled automated content, which Google has historically treated as spam.
The answer lies in the nuance of Google’s policy, which encourages publishers to prioritize user-first content over a search-engine-first approach. In my opinion, putting a strong focus on writing about the most popular search queries in a topic, instead of writing about the topic itself, can lead to search-engine-first content. That was a common approach among sites I’ve audited that contained relatively high-quality content but lost rankings in the 2024 Google updates.
Google’s advice (and presumably Chris Nelson’s) for those considering AI-generated content is:
“…however content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.”
Why Doesn’t Google Ban AI-Generated Content Outright?
Google’s documentation that Chris Nelson co-authored states that automation has always been a part of publishing, such as dynamically inserted sports scores, weather forecasts, scaled meta descriptions, and date-dependent content and products related to entertainment.
The documentation states:
“…For example, about 10 years ago, there were understandable concerns about a rise in mass-produced yet human-generated content. No one would have thought it reasonable for us to declare a ban on all human-generated content in response. Instead, it made more sense to improve our systems to reward quality content, as we did.
…Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. …Automation has long been used in publishing to create useful content. AI can assist with and generate useful content in exciting new ways.”
Why Does Google Detect AI-Generated Content?
The documentation that Nelson co-authored explicitly states that Google doesn’t differentiate based on how low-quality content is generated, which seemingly contradicts his LinkedIn profile, which lists “detection and treatment of AI-generated content” as part of his job.
The AI-generated content guidance states:
“Poor quality content isn’t a new challenge for Google Search to deal with. We’ve been tackling poor quality content created both by humans and automation for years. We have existing systems to determine the helpfulness of content. …Our systems continue to be regularly improved.”
How do we reconcile the fact that part of his job is detecting AI-generated content with Google’s policy, which states that it doesn’t matter how low-quality content is generated?
Context is everything, that’s the answer. Here’s the context of his work profile:
“Address novel content issues (e.g., detection and treatment of AI-generated content)”
The phrase “novel content issues” means content quality issues that Google hasn’t previously encountered. This refers to new types of AI-generated content, presumably spam, and how to detect and “treat” it. Given the context of “detection and treatment,” the unstated subject may well be low-quality content; it likely wasn’t spelled out because he probably didn’t expect his LinkedIn profile to be parsed by SEOs for a better understanding of how Google detects and treats AI-generated content (meta!).
Guidance Authored By Chris Nelson Of Google
A list of articles published by Chris Nelson shows that he may have played a role in many of the most important updates of the past five years, from the Helpful Content update and site reputation abuse to detecting search-engine-first AI-generated content.
Google has reportedly told the EU it won’t add fact-checking to search results or YouTube videos, nor will it use fact-checks to influence rankings or remove content.
This decision defies new EU rules aimed at tackling disinformation.
Google Says No to EU’s Disinformation Code
In a letter to Renate Nikolay of the European Commission, Google’s global affairs president, Kent Walker, said fact-checking “isn’t appropriate or effective” for Google’s services.
The EU’s updated Disinformation Code, part of the Digital Services Act (DSA), would require platforms to include fact-checks alongside search results and YouTube videos and to bake them into their ranking systems.
Walker argued Google’s current moderation tools—like SynthID watermarking and AI disclosures on YouTube—are already effective.
He pointed to last year’s elections as proof Google can manage misinformation without fact-checking.
Google also confirmed it plans to fully exit all fact-checking commitments in the EU’s voluntary Disinformation Code before it becomes mandatory under the DSA.
Context: Major Elections Ahead
This refusal from Google comes ahead of several key European elections, including:
Germany’s Federal Election (Feb. 23)
Romania’s Presidential Election (May 4)
Poland’s Presidential Election (May 18)
Czech Republic’s Parliamentary Elections (Sept.)
Norway’s Parliamentary Elections (Sept. 8)
These elections will likely test how well tech platforms handle misinformation without stricter rules.
Tech Giants Backing Away from Fact-Checking
Google’s decision follows a larger trend in the industry.
Last week, Meta announced it would end its fact-checking program on Facebook, Instagram, and Threads and shift to a crowdsourced model like X’s (formerly Twitter) Community Notes.
Elon Musk has drastically reduced moderation efforts on X since buying the platform in 2022.
What It Means
As platforms like Google and Meta move away from active fact-checking, concerns are growing about how misinformation will spread—especially during elections.
While tech companies say transparency tools and user-driven features are enough, critics argue they’re not doing enough to combat disinformation.
Google’s pushback signals a growing divide between regulators and platforms over how to manage harmful content.
You can sell your products online in many marketplaces, and all these platforms benefit from SEO. From improving your photos to writing better product descriptions, on-page SEO is key if you want your product listings to rank in search.
What is marketplace SEO?
Marketplace SEO is about making a platform with many sellers — like Airbnb, Etsy, or Amazon — easy to find on search engines. It means improving product details, organizing the site well, and using customer reviews and content. Optimizing the listings on these platforms helps more people find the products and increases sales for everyone involved.
For instance, Etsy gives store owners plenty of options to improve their product listings, such as writing detailed product descriptions or adding great images. As a result, a well-optimized, handcrafted item stands a much better chance of ranking in the search results. Similarly, Airbnb allows property owners to update their listings with local keywords so that they appear when users search for accommodations in specific areas.
Effective marketplace SEO means that when someone searches for “vintage leather bags,” Etsy listings are among the top results. Or when a traveler looks for “secluded cabin in Vermont,” Airbnb’s optimized listings pop up. The result, of course, is to quickly connect marketplace buyers and sellers.
An Airbnb listing appearing in a Google search for a key term
On-page SEO for marketplaces
You can use on-page SEO as a tool for marketplace SEO; it directly impacts how easily potential customers find your listings. Take the time to optimize elements like titles, descriptions, and images, and your marketplace listings will be more visible in search results. That visibility should lead to more clicks and more sales.
When each page is optimized correctly, search engines better understand your content, which improves your rankings. This means your marketplace appears in front of the right audience at the right time.
Optimizing images
For marketplace SEO, you need good images to capture attention and show the quality of your products. If possible, make sure that your images are high-resolution and professionally shot. All your images should be relevant and showcase the product from more than one angle to give buyers a good feel for it.
Add clear alt text like “handmade ceramic mug” to make your images more accessible and to help search engines recognize what the image shows. This improves the SEO of your marketplace listings and makes them more appealing to users who rely on screen readers or speech-based browsers.
Don’t forget to optimize the image itself. Compress files to maintain quality while reducing load times, as slow-loading images can drive users away. Tools like Squoosh can help with compression without losing quality. Ensure file names are descriptive and include keywords where appropriate, such as “organic-cotton-shirt.jpg,” to boost search visibility further.
Great photography and branding make your products stand out
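If you have many listings, the compression step can be scripted. Below is a minimal sketch using the Pillow imaging library; the folder names, size cap, and quality setting are illustrative assumptions rather than marketplace requirements.

from pathlib import Path
from PIL import Image

def compress_listing_images(src_dir: str, out_dir: str, quality: int = 80) -> None:
    """Resize oversized photos and re-save them as optimized JPEGs."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path)
        img.thumbnail((1600, 1600))  # cap the longest side, preserving aspect ratio
        # Keep the descriptive, keyword-bearing file name, e.g. organic-cotton-shirt.jpg
        img.save(out / path.name, "JPEG", optimize=True, quality=quality)

compress_listing_images("photos/raw", "photos/web")  # hypothetical folders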
Create great titles and meta descriptions
Titles and meta descriptions can draw users from search engine results. Crafting an engaging title means more than just adding keywords; it should capture what the page is about. Keep it short but descriptive, like “Elegant Summer Dresses – Affordable Fashion Online.” This approach catches users’ attention and clearly tells them what to expect. Ensure the titles aren’t too long so they appear fully in search results without getting cut off.
Descriptions provide more details about your listing. A good meta description should highlight features and benefits. It should also mention why people should click on your listing instead of your competitors. Here’s a good example: “Discover our range of elegant summer dresses crafted from breathable fabrics for comfort and style. Perfect for any occasion, at prices you’ll love.” This has all the keywords, and it speaks to the customer. Keep it short, accurate and engaging.
Writing great product descriptions
Product descriptions sell your products, so it’s worth investing plenty of time in writing them well. Start by understanding your target audience and use those insights to address their needs and concerns. Write naturally and incorporate the keywords you want the product to rank for.
Many sellers simply list features, but explaining these in context is better. Here’s an example: “This eco-friendly bamboo toothbrush is designed for comfort and sustainability, reducing your carbon footprint while providing a gentle clean.” By writing this way, you set your products apart from your competitors.
Another way to make your products stand out is through storytelling or vivid language that paints a picture. Here’s one: “Imagine sipping your morning coffee from this handcrafted ceramic mug, its unique glaze reflecting the artisan’s touch.” Instead of simply listing details, you appeal to the buyer’s emotions. This also offers a way to make the shopping experience more personal.
Whatever you do, try to avoid using the text provided by the manufacturer, as these will be used on thousands of sites. Try to make it unique for you and your customers.
An example of a good product description for an Etsy item
Using product specifications properly
Specifications help customers decide whether a specific product is right for them, so it’s important to list them with care. Make sure to present details such as product identifiers, dimensions, materials, weight, and compatibility in a way that’s easy to scan.
This makes it easy for customers to find the information they need quickly. It also helps search engines index your content more effectively. Including specifications can give customers more confidence in buying the product, as they know exactly what they are purchasing.
Additionally, think about the unique features that specifications can highlight. For instance, computer shoppers would love to know what kind of processor a laptop has. Shoppers use these details to compare products based on specific technical attributes. Where applicable, use bullet points to present data clearly and concisely.
Implementing Schema markup
You can’t live without schema markup if you want to improve your marketplace SEO. Schema markup adds structured data to your products, which helps search engines understand and interpret them. If you do it well, your products might get rich results, with added star ratings, pricing, and availability.
For instance, an item with a product and reviews markup might show a 4.8-star rating and price directly in Google. These listings appeal to customers and could lead to a better CTR.
If you want to add schema to your products, you need to use specific types of markup relevant to your marketplace, such as Product, Review, and Offer schemas. Tools like Yoast SEO can implement schema code for you automatically. Whatever you do, don’t forget to test your structured data to make sure it’s correctly implemented and up-to-date.
Properly set up products with structured data will get rich results in Google
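To make that concrete, here is a minimal sketch of Product markup with a nested Offer and AggregateRating, built as a Python dictionary and serialized to JSON-LD. The product details and values are invented for illustration; a real listing would use its actual price, currency, and review counts.

import json

product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Handmade Ceramic Mug",
    "image": "https://example.com/images/handmade-ceramic-mug.jpg",  # hypothetical URL
    "description": "Handcrafted ceramic mug with a unique artisan glaze.",
    "offers": {
        "@type": "Offer",
        "price": "24.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "132",
    },
}

# Paste the output into a <script type="application/ld+json"> tag on the product page.
print(json.dumps(product_schema, indent=2))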
Use Yoast SEO for Shopify
Working on your online store can be tiresome, but luckily, there are tools to help you. Yoast SEO for Shopify is a great app that makes managing it much easier. It has content optimization features, AI tools for creating awesome titles and meta descriptions, and many schema enhancements. Together, these features help improve your store’s visibility in search.
Conclusion to marketplace SEO
For your marketplace to succeed, you need on-page SEO. Work on product visibility and engagement by improving images, creating clear meta tags, and refining product details. Tools like Yoast SEO for Shopify simplify this process. These methods can attract more visitors and improve your marketplace’s performance.
Edwin is an experienced strategic content specialist. Before joining Yoast, he worked for a top-tier web design magazine, where he developed a keen understanding of how to create great content.
Programmatic SEO is an approach to SEO and content creation that leverages automation and technology to efficiently create, optimize, and manage a large volume of webpages.
It’s particularly useful for websites that require thousands, or even millions, of pages to rank for diverse search queries.
Ecommerce giants like Amazon or travel websites like Expedia rely on programmatic SEO to dynamically generate pages for every product, location, or service they offer.
The power of programmatic SEO lies in its ability to handle such scale while maintaining a focus on relevant keywords, content structure, and user intent.
Defining Your Objectives
Before starting with programmatic SEO, define your goals.
Clear goals guide your strategy and measure success.
Set KPIs: Use metrics like traffic growth, conversions, and rankings to track progress.
Find Opportunities: Research your industry and competitors to uncover untapped keywords or markets.
Prioritize User Intent: Create content that answers questions and solves user problems.
Programmatic Keyword Research
In traditional keyword research, the goal is often to identify high-search volume keywords that can drive significant traffic to a website.
However, these keywords usually come with high competition, making it challenging for newer or smaller sites to rank well in search engine results.
Programmatic SEO takes a different approach by targeting low-search volume and low-competition, long-tail keywords.
This strategy focuses on creating a large number of pages optimized for specific queries, allowing you to rank higher more easily and attract a highly targeted audience.
Keywords in programmatic SEO consist of two main components:
Head Terms
Head terms are broad keywords that describe a general topic or category. Head terms often have the following characteristics:
High average monthly search volumes.
Tend to be “short tail.”
Have multiple common interpretations.
Tend to have more stable SERPs, with a lot of competition targeting the query.
Examples include keywords such as “onboarding software,” “winter sun vacations,” or “crm software.”
Modifiers
Modifiers are words or phrases that add specificity to head terms. They are easy to identify because they follow patterns, which vary greatly between sectors.
Common modifier patterns include:
“for SaaS.”
“for staffing agencies.”
“for accountants.”
“best practices.”
“2025.”
In contrast to head terms, modifiers tend to have lower average monthly search volumes, aside from occasional spikes in traffic. When combined with head terms, however, they create more targeted queries with more focused intent, helping you capture visibility with niche audiences and consumers who are showing intent.
Combined with head terms, we tend to call these “long-tail” keywords.
Reliable Datasets
To scale programmatic SEO effectively, you need a reliable dataset that can generate unique, valuable, and relevant pages.
Depending on the types of pages you’re creating, you need to understand and anticipate the change frequency of the data, and how your infrastructure will handle the changes.
Many platforms provide APIs that you can use to fetch structured data.
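As a rough illustration, a pipeline might pull structured records from such an API and feed them into page templates. The endpoint, parameters, and response shape below are hypothetical; substitute the platform’s documented API.

import requests

def fetch_listings(city: str) -> list[dict]:
    """Fetch structured listing data for one location."""
    resp = requests.get(
        "https://api.example.com/v1/listings",  # hypothetical endpoint
        params={"city": city},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]  # hypothetical response shape

# Each record can then populate a "[Category] in [Location]" page template.
for listing in fetch_listings("Paris"):
    print(listing.get("name"), listing.get("price"))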
By grouping related keywords into clusters, you can develop scalable, template-based pages that enhance relevance for users and search engines.
This approach allows for efficiency and customization while aligning with search intent.
Clustering also allows for more seamless automation and reduces the potential to create large swathes of pages with near-duplicate intents and purposes.
1. Categorize By Intent
Start by grouping keywords according to their search intent. This ensures your content addresses specific user needs, such as:
Informational: Answering questions or providing knowledge. Example: What are the best coffee shops in Boston?
Transactional: Enabling actions like purchases or bookings. Example: Order coffee beans online in Boston.
Navigational: Helping users locate specific places or brands. Example: Starbucks locations in Boston.
2. Define Pages Based On Patterns
Once you’ve categorized keywords, identify common patterns to create flexible templates. This strategy helps structure content consistently across multiple pages.
Location-Specific Templates:
Format: [Category] in [Location].
Example: Hotels in Paris.
Feature-Specific Templates:
Format: [Product] with [Feature].
Example: Smartphones with best cameras.
Use Case-Specific Templates:
Format: [Service] for [Audience/Use Case].
Example: CRMs for hospitality industry.
3. Expand With Modifiers
Enhance clusters by incorporating commonly searched modifiers to make the content more comprehensive:
Price-Related Modifiers: Add terms like cheap, affordable, or luxury.
Time-Related Modifiers: Include phrases such as “near me now” or “open late.”
Specific Features: Highlight characteristics like “with a pool,” “pet-friendly,” or “free delivery.”
4. Combine Variations
Use combinations of templates, categories, and modifiers to address long-tail keywords and niche queries (see the sketch after these examples). Examples include:
Pet-friendly hotels in Chicago with free breakfast.
Best Italian restaurants in New York open late.
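Here is a minimal sketch of generating such combinations programmatically. The head terms, locations, and modifiers are placeholders to swap for your own keyword research, and every generated query should still be validated against real search demand.

from itertools import product

categories = ["hotels", "italian restaurants"]  # head terms
locations = ["Chicago", "New York"]
modifiers = ["with free breakfast", "open late", "pet-friendly"]

# Combine "[Category] in [Location] [Modifier]" into candidate long-tail queries.
for category, location, modifier in product(categories, locations, modifiers):
    print(f"{category} in {location} {modifier}")  # e.g. "hotels in Chicago with free breakfast"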
Programmatic SEO relies on automated systems to generate content at scale, reducing the manual workload involved in traditional SEO efforts.
Automation allows businesses to rapidly create pages that address various user needs, ensuring coverage of broad and highly specific search terms.
Programmatic SEO Challenges
Programmatic SEO can offer tremendous scalability and efficiency, but it’s not always the right approach for every website.
A manual SEO strategy may be a better fit for small sites or those requiring significant customization.
However, when using programmatic SEO, it’s important to address potential challenges to ensure success.
Over-Prioritizing Keywords
Automation should never compromise the quality of the user experience. Pages must provide meaningful, accurate, and engaging content that answers user queries effectively.
Overemphasizing keywords can result in content that feels unnatural or overly optimized. This can harm user experience and reduce click-through rates.
Avoid stuffing keywords and instead prioritize readability and relevance. Ensure your content provides value by answering user queries comprehensively.
Crawlability And Indexing Issues
Large websites with programmatically generated pages can face challenges with crawlability and indexing. If pages lack structure or unique value, Google may struggle to index them properly.
To alleviate these issues, aside from improving the overall page quality to show unique value and beneficial purpose, you can optimize:
Internal Linking: Implement a robust internal linking strategy to help search engines discover and prioritize pages.
Backlinks: Acquire backlinks to improve visibility and encourage indexing.
Sitemaps: Use an XML sitemap and adhere to Google’s limit of 50,000 URLs per sitemap. Organize your sitemap logically by grouping pages into categories or themes (see the sketch below).
These steps will enhance crawlability, and potentially move the pages above Google’s indexing threshold. If the issues persist, the focus should be on content pruning and improving quality.
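As a sketch of the sitemap step, the following splits a large URL list into files that stay under the 50,000-URL limit and writes a sitemap index pointing at each one. The file names and domain are illustrative assumptions.

CHUNK = 50_000  # Google's per-sitemap URL limit

def write_sitemaps(urls: list[str], base: str = "https://example.com") -> None:
    """Write sitemap-1.xml, sitemap-2.xml, ... plus a sitemap index."""
    chunks = [urls[i:i + CHUNK] for i in range(0, len(urls), CHUNK)]
    for n, chunk in enumerate(chunks, start=1):
        # Real pipelines should XML-escape URLs before writing them.
        entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in chunk)
        with open(f"sitemap-{n}.xml", "w") as f:
            f.write(
                '<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                f"{entries}\n</urlset>\n"
            )
    refs = "\n".join(
        f"  <sitemap><loc>{base}/sitemap-{n}.xml</loc></sitemap>"
        for n in range(1, len(chunks) + 1)
    )
    with open("sitemap-index.xml", "w") as f:
        f.write(
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{refs}\n</sitemapindex>\n"
        )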
Thin Content
Thin content lacks value for users and doesn’t satisfy Google’s quality standards. Pages with minimal or irrelevant information are unlikely to rank well.
Thin content can be addressed in several ways:
Remove Low-Value Content: Eliminate outdated or irrelevant pages that offer little benefit to users or your SEO strategy.
Improve Content Quality: Add meaningful text, descriptive captions, and relevant multimedia like images or videos.
Consolidate Pages: Merge thin content pages into a single, comprehensive piece to increase relevance and depth.
Final Thoughts
When implemented effectively, programmatic SEO can drive significant organic traffic, expand market reach, and establish a competitive edge.
However, achieving success requires a thoughtful balance of strategic planning, technical optimization, and a commitment to delivering value to users.
Knowledge graphs have existed for a long time and have proven valuable across social media sites, cultural heritage institutions, and other enterprises.
A knowledge graph is a collection of relationships between entities defined using a standardized vocabulary.
It structures data in a meaningful way, enabling greater efficiencies and accuracies in retrieving information.
LinkedIn, for example, uses a knowledge graph to structure and interconnect data about its members, jobs, titles, and other entities. It uses its knowledge graph to enhance its recommendation systems, search features, and other products.
Google’s knowledge graph is another well-known knowledge graph that powers knowledge panels and our modern-day search experience.
In recent years, content knowledge graphs, in particular, have become increasingly popular within the marketing industry due to the rise of semantic SEO and AI-driven search experiences.
What Is A Content Knowledge Graph?
A content knowledge graph is a specialized type of knowledge graph.
It is a structured, reusable data layer of the entities on your website, their attributes, and their relationship with other entities on your website and beyond.
In a content knowledge graph, the entities on your website and their relationships can be defined using a standardized vocabulary like Schema.org and expressed as Resource Description Framework (RDF) triples.
RDF triples are represented as “subject-predicate-object” statements, and they illustrate how an entity (subject) is related to another entity or a simple value (object) through a specific property (predicate).
For example, I, Martha van Berkel, work for Schema App. This is stated in plain text on our website, and we can use Schema.org to express this in JSON-LD, which allows machines to understand RDF statements about entities.
Image showing how content gets translated into Schema.org using JSON-LD, which forms a connected graph of RDF triples (Image from author, November 2024)
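For illustration, here is a minimal sketch of that statement expressed as Schema.org JSON-LD via a Python dictionary. The @id URLs are invented placeholders, not Schema App’s actual markup.

import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://www.schemaapp.com/#martha-van-berkel",  # subject (hypothetical @id)
    "name": "Martha van Berkel",
    "worksFor": {                                           # predicate
        "@type": "Organization",
        "@id": "https://www.schemaapp.com/#organization",   # object (hypothetical @id)
        "name": "Schema App",
    },
}
print(json.dumps(person, indent=2))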
Your website content is filled with entities that are related to each other.
When you use Schema Markup to describe the entities on your site and their relationships to other entities, you essentially express them as RDF triples that form your content knowledge graph.
But before you start building a content knowledge graph, you should understand why you’re building one and how your team can benefit from it.
Content Knowledge Graphs Drive Semantic Understanding For Search Engines
Over the past few years, search engines have shifted from lexical to semantic search. This means less matching of keywords and more matching of relevant entities.
Your content knowledge graph showcases all the relationships between the entities on your website and across the web, which provides search engines with greater context and understanding of topics and entities mentioned on your website.
You can also connect the entities within your content knowledge graph with known entities found in external authoritative knowledge bases like Wikipedia, Wikidata, and Google’s Knowledge Graph.
This is known as entity linking, and it can add even more context to the entities mentioned on your site, further disambiguating them.
Example of linking an entity to external authoritative knowledge bases using Schema Markup (Image from author, November 2024)
Your content knowledge graph ultimately enables search engines to explicitly understand the relevance of your content to a user’s search query, leading to more precise and useful search results for users and qualified traffic for your organization.
Content Knowledge Graphs Can Reduce AI Hallucinations
Beyond SEO, content knowledge graphs are also crucial for improving AI performance. As businesses adopt more AI technologies like AI chatbots, combatting AI hallucination is now a key factor to success.
While large language models (LLMs) can use patterns and probabilities to generate answers, they lack the ability to fact-check, resulting in erroneous or speculative answers.
Content knowledge graphs, on the other hand, are built from reliable data sources like your website, ensuring the credibility and accuracy of the information.
This means that the content knowledge graph you’ve built to drive SEO can also be reused to ground LLMs in structured, verified, domain-specific knowledge, reducing the risk of hallucinations.
Recent research by data.world showed that grounding an LLM in a knowledge graph of an enterprise SQL database increased answer accuracy to 54% (from 16%).
Content knowledge graphs are rooted in factual information about entities related to your organization, making them a great data source for content insights.
Content Knowledge Graphs Can Drive Content Strategies
High-quality content is one of the cornerstones of great SEO. However, content marketers are often challenged with figuring out where the gaps are in their existing content about the entities and topics they want to drive traffic for.
Content knowledge graphs have the ability to provide content teams with a holistic view of their entities to get useful insights to inform their content strategy. Let’s dive deeper.
Get A Holistic View Of Entities Across Your Content
Traditionally, content marketing teams would manually audit or use a spreadsheet or relational database (tables, rows, and columns) to manage their content. The issue with a relational database is its lack of semantic meaning.
For example, a table could capture the title, URL, author, meta description, word count, and topic of an article. However, it cannot capture entities mentioned in a plain-text article.
If you want to know which pages on your website currently mention an old product you no longer provide, identifying these pages is hard and very manual.
Content knowledge graphs, on the other hand, provide a multi-dimensional categorization system for your content.
When built using the Schema.org vocabulary, the detailed types and properties enable you to capture the connections between different content pieces based on entities and taxonomy.
These properties connect your blog article (an entity) to other entities you’ve defined on your site. The author of a specific article is a Person who you might have defined on an Author page.
Your article might mention a product or service that you’ve defined on other pages on your site.
Example of a content knowledge graph that shows how a blog post is connected to other entities through the Schema.org properties (Image from author, November 2024)
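As a hypothetical illustration of those connections, an article’s markup might look like the sketch below; every URL and name is invented.

import json

article = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "@id": "https://example.com/blog/content-knowledge-graphs#article",
    "headline": "Why Build a Content Knowledge Graph?",
    # Person entity defined once on an author page, referenced by @id here
    "author": {"@id": "https://example.com/authors/jane-doe#person"},
    "about": {"@id": "https://example.com/topics/knowledge-graphs#topic"},
    # Product entity defined on its own page elsewhere on the site
    "mentions": [{"@id": "https://example.com/products/schema-tool#product"}],
}
print(json.dumps(article, indent=2))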
For marketing teams that have to manage large volumes of content, structuring your content into a content knowledge graph can give you a more holistic view of your content and entities.
You can easily perform a content audit to find out what exists on your website without manually auditing the site or updating a spreadsheet.
This, in turn, enables you to perform content analysis with ease and get deeper insights into your content.
Get Deeper Insight Into Your Content
With a holistic view provided by your content knowledge graph, you can easily audit your content and entities to identify gaps and opportunities to improve your content strategy.
Example 1: You want to strengthen your E-E-A-T for specific authors on your site. Your content knowledge graph will showcase:
All the content this author has created, edited, or contributed to.
How the author is related to your organization and other acclaimed entities.
The author’s role, job title, awards, credentials, and certifications.
This unified view can provide your team with a broad overview of this author and identify content opportunities to improve the author’s topical authority on your site.
Example 2: Your organization wants to remove all mentions of COVID-19 protocols from your website.
You can query your content knowledge graph to identify past content that mentions the topic “COVID-19” and assess the relevance and necessity of each mention before removing it from your content.
This targeted approach can enable your team to refine their content without investing too much time in manual reviews.
Since content knowledge graphs built using Schema.org are expressed as RDF triples, you can use the query language SPARQL to find out which pages a specific entity is mentioned in or how much content you have on a specific entity or topic.
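Here is a minimal sketch of such a query using the rdflib Python library, which parses JSON-LD natively in version 6 and later. The graph file and entity URL are illustrative assumptions.

from rdflib import Graph

g = Graph()
g.parse("content-graph.jsonld", format="json-ld")  # hypothetical export of your markup

# Find every page that mentions a specific entity.
query = """
PREFIX schema: <https://schema.org/>
SELECT ?page WHERE {
  ?page schema:mentions <https://example.com/products/schema-tool#product> .
}
"""
for row in g.query(query):
    print(row.page)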
This will help your team answer strategic questions such as:
Which entities are underrepresented in your website content?
Where can additional content be created to improve entity coverage?
What existing content should be improved?
Beyond their SEO and AI benefits, content knowledge graphs have the potential to help content marketing teams perform content analysis with greater efficiency and accuracy.
It’s Time To Start Investing In Content Knowledge Graphs
Today, content knowledge graphs represent a shift: creating content is no longer just a content manager’s job but an opportunity for SEO professionals to build an interconnected content data source that answers questions and identifies opportunities for the content team.
It is a crucial technology for organizations looking to differentiate themselves in an increasingly complex digital landscape.
Investing in content knowledge graphs now positions your organization at the forefront of SEO and content optimization, giving you the tools to navigate tomorrow’s challenges.
And it all starts with implementing semantic schema markup on your site.
The Dot AI domain has migrated to a new domain name registry, giving all registrants of .AI domains stronger security and more stability, with greater protection against outages.
Dot AI Domain
.AI is a country-code top-level domain (ccTLD), which is distinct from a gTLD. A ccTLD is a two-letter domain reserved for a specific country or territory; .US, for example, is reserved for the United States of America. .AI is reserved for Anguilla, a British Overseas Territory in the Caribbean.
.AI Is Now Handled By Identity Digital
The .AI domain was previously handled by a local small business named DataHaven.net but has now fully migrated to the Identity Digital platform, making .AI available through over 90% of registrars worldwide and backed by a 100% availability guarantee. The migration also provides distribution of .AI domain updates within milliseconds and greater resistance to denial-of-service attacks.
According to the announcement:
“Beginning today, .AI is exclusively being served on the Identity Digital platform, and we couldn’t be more thrilled for what this means for Anguilla.
The quick migration brings important enhancements to the .AI TLD like 24/7 global support, and a growing list of features that will benefit registrars, businesses and entrepreneurs today and in the years to come.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
China wants to restore the sea with high-tech marine ranches
A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.
Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.
The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.
—Matthew Ponsford
This story is from the latest print edition of MIT Technology Review—it’s all about the exciting breakthroughs happening in the world right now. If you don’t already, subscribe to receive future copies.
Generative AI is causing a paradigm shift in how robots are trained. It’s now clear how we might finally build the sort of truly capable robots that have for decades remained the stuff of science fiction.
A few years ago, roboticists began marveling at the progress being made in large language models. Makers of those models could feed them massive amounts of text—books, poems, manuals—and then fine-tune them to generate text based on prompts.
It’s one thing to use AI to create sentences on a screen, but another thing entirely to use it to coach a physical robot in how to move about and do useful things. Now, roboticists have made major breakthroughs in that pursuit. Read the full story.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 US regulators are suing Elon Musk
For allegedly violating securities law when he bought Twitter in 2022. (NYT $)
+ The case claims that Musk continued to buy shares at artificially low prices. (FT $)
+ Musk is unlikely to take it lying down. (Politico)
2 SpaceX has launched two private missions to the moon
Falling debris from the rockets has forced Qantas to delay flights. (The Guardian)
+ The airline has asked for more precise warnings around future launches. (Semafor)
+ Space startups are on course for a funding windfall. (Reuters)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)
3 Home security cameras are capturing homes burning down in LA
Residents have remotely tuned into live footage of their own homes burning. (WP $)
+ California’s water scarcity is only going to get worse. (Vox)
+ How Los Angeles can rebuild in the wake of the devastation. (The Atlantic $)
4 ChatGPT is about to get much more personal
Including reminding you about walking the dog. (Bloomberg $)
5 Inside the $30 million campaign to liberate social media from billionaires
Free Our Feeds wants to restructure platforms around open-source tech. (Insider $)
6 How to avoid getting sick right now
You probably already own one of the best defenses. (The Atlantic $)
+ But coughs and sneezes could be the least of our problems. (The Guardian)
7 The US and China are still collaborating on AI research
Despite rising tensions between the countries. (Rest of World)
8 These startups think they have the solution to loneliness
Making friends isn’t always easy, but these companies have some ideas. (NY Mag $)
9 Here are just some of the ways the universe could end
Don’t say I didn’t warn you. (Ars Technica)
+ But at least Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)
10 AI is inventing impossible languages
They could help us learn more about how humans learn. (Quanta Magazine)
+ These impossible instruments could change the future of music. (MIT Technology Review)
Quote of the day
“If you can get away with it when it’s front-page news, why bother to comply at all?”
—Marc Fagel, a former director of the SEC’s San Francisco office, suggests the agency’s decision to sue Elon Musk is intended as a deterrent to others, the Wall Street Journal reports.
The big story
I took an international trip with my frozen eggs to learn about the fertility industry
September 2022
—Anna Louie Sussman
Like me, my eggs were flying economy class. They were ensconced in a cryogenic storage flask packed into a metal suitcase next to Paolo, the courier overseeing their passage from a fertility clinic in Bologna, Italy, to the clinic in Madrid, Spain, where I would be undergoing in vitro fertilization.
The shipping of gametes and embryos around the world is a growing part of a booming global fertility sector. As people have children later in life, the need for fertility treatment increases each year.
After paying for storage costs for six and four years, respectively, at 40 I was ready to try to get pregnant. Transporting the Bolognese batch served to literally put all my eggs in one basket. Read the full story.
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ We need to save the world’s largest sea star!
+ Maybe our little corner of the universe is more special than we’ve been led to believe after all.
+ How the world’s leading anti-anxiety coach overcame her own anxiety.
+ Here’s how to keep your eyes on the prize in 2025—and beyond!
In the rapidly evolving landscape of digital innovation, staying adaptable isn’t just a strategy—it’s a survival skill. “Everybody has a plan until they get punched in the face,” says Luis Niño, digital manager for technology ventures and innovation at Chevron, quoting Mike Tyson.
Drawing from a career that spans IT, HR, and infrastructure operations across the globe, Niño offers a unique perspective on innovation and how organizational microcultures within Chevron shape how digital transformation evolves.
Centralized functions prioritize efficiency, relying on tools like AI, data analytics, and scalable system architectures. Meanwhile, business units focus on simplicity and effectiveness, deploying robotics and edge computing to meet site-specific needs and ensure safety.
“From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant,” he says.
Central to this transformation is the rise of industrial AI. Unlike consumer applications, industrial AI operates in high-stakes environments where the cost of errors can be severe.
“The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes,” says Niño. “If a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies.”
Niño highlights Chevron’s efforts to use AI for predictive maintenance, subsurface analytics, and process automation, noting that “AI sits on top of that foundation of strong data management and robust telecommunications capabilities.” As such, AI is not just a tool but a transformation catalyst redefining how talent is managed, procurement is optimized, and safety is ensured.
Looking ahead, Niño emphasizes the importance of adaptability and collaboration: “Transformation is as much about technology as it is about people.” With initiatives like the Citizen Developer Program and Learn Digital, Chevron is empowering its workforce to bridge the gap between emerging technologies and everyday operations using an iterative mindset.
Niño is also keeping watch over the convergence of technologies like AI, quantum computing, Internet of Things, and robotics, which hold the potential to transform how we produce and manage energy.
“My job is to keep an eye on those developments,” says Niño, “to make sure that we’re managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective.”
This episode of Business Lab is produced in association with Infosys Cobalt.
Full Transcript
Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
Our topic today is digital transformation, from back office operations to infrastructure in the field like oil rigs, companies continue to look for ways to increase profit, meet sustainability goals, and invest in the latest and greatest technology.
Two words for you: enabling innovation.
My guest is Luis Niño, who is the digital manager of technology ventures and innovation at Chevron. This podcast is produced in association with Infosys Cobalt.
Welcome, Luis.
Luis Niño: Thank you, Megan. Thank you for having me.
Megan: Thank you so much for joining us. Just to set some context, Luis, you’ve had a really diverse career at Chevron, spanning IT, HR, and infrastructure operations. I wonder, how have those different roles shaped your approach to innovation and digital strategy?
Luis: Thank you for the question. And you’re right, my career has spanned many different areas and geographies in the company. It really feels like I’ve worked for a different company every time I’ve changed roles: different functions, organizations, and locations, from here in Houston to Bakersfield, California, and Buenos Aires, Argentina. From an organizational standpoint, I’ve seen central teams, international service centers, as you mentioned, field infrastructure and operations organizations in our business units, and I’ve also had corporate function roles.
And the reason why I mentioned that diversity is that each one of those looks at digital transformation and innovation through its own lens. From the priority to scale and streamline in central organizations to the need to optimize and simplify out in business units and what I like to call the periphery, you really learn about the concept first off of microcultures and how different these organizations can be even within our own walls, but also how those come together in organizations like Chevron.
Over time, I would highlight two things. In central organizations, whether that’s functions like IT, HR, or our technical center, we have a central technical center, where we continuously look for efficiencies in scaling, for system architectures that allow for economies of scale. As you can imagine, the name of the game is efficiency. We have also looked to improve employee experience. We want to orchestrate ecosystems of large technology vendors that give us an edge and move the massive organization forward. In areas like this, in central areas like this, I would say that it is data analytics, data science, and artificial intelligence that has become the sort of the fundamental tools to achieve those objectives.
Now, if you allow that pendulum to swing out to the business units and to the periphery, the name of the game is effectiveness and simplicity. The priority for the business units is to find and execute technologies that help us achieve the local objectives and keep our people safe. Especially when we are talking about our manufacturing environments where there’s risk for our folks. In these areas, technologies like robotics, the Internet of Things, and obviously edge computing are currently the enablers of information.
I wouldn’t want to miss the opportunity to say that both of those, let’s call it, areas of the company, rely on the same foundation and that is a foundation of strong data management, of strong network and telecommunications capabilities because those are the veins through which the data flows and everything relies on data.
In my experience, this pendulum also drives our technology priorities and our technology strategy. From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant. If you are deploying something in the center and you suddenly realize that some business unit already has a solution, you cannot just say, let’s shut it down and go with what I said. You have to adapt, you have to understand behavioral change management and you really have to make sure that change and adjustments are your bread and butter.
I don’t know if you know this, Megan, but there’s a popular fight happening this weekend with Mike Tyson and he has a saying, and that is everybody has a plan until they get punched in the face. And what he’s trying to say is you have to be adaptable. The plan is good, but you have to make sure that you remain agile.
Megan: Yeah, absolutely.
Luis: And then I guess the last lesson, really quick, is about risk management, or maybe risk appetite. Each group has its own risk appetite depending on the lens or where they’re sitting, and this may create some conflict between organizations that want to move really, really fast and have urgency and others that want to take a step back and make sure that we’re doing things right. Striking that balance, I think, is at the end a question for leadership, to make sure that they have a pulse on our ability to change.
Megan: Absolutely, and you’ve mentioned a few different elements and technologies I’d love to dig into a bit more detail on. One of which is artificial intelligence because I know Chevron has been exploring AI for several years now. I wonder if you could tell us about some of the AI use cases it’s working on and what frameworks you’ve developed for effective adoption as well.
Luis: Yeah, absolutely. This is the big one, isn’t it? Everybody’s talking about AI. As you can imagine, the focus in our company is what is now being branded as industrial AI. That’s really a simple term to explain that AI is being applied to industrial and manufacturing settings. And like other AI, and as I mentioned before, the foundation remains data. I want to stress the importance of data here.
One of the differences however is that in the case of industrial AI, data comes from a variety of sources. Some of them are very critical. Some of them are non-critical. Sources like operating technologies, process control networks, and SCADA, all the way to Internet of Things sensors or industrial Internet of Things sensors, and unstructured data like engineering documentation and IT data. These are massive amounts of information coming from different places and also from different security structures. The complexity of industrial AI is considerably higher than what I would call consumer or productivity AI.
Megan: Right.
Luis: The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes. When you’re in an industrial setting, if a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies.
AI sits on top of that foundation and it takes different shapes. It can show up as a copilot like the ones that have been popularized recently, or it can show up as agentic AI, which is something that we’re looking at closely now. And agentic AI is just a term to mean that AI can operate autonomously and can use complex reasoning to solve multistep problems in an industrial setting.
So with that in mind, going back to your question, we use both kinds of AI for multiple use cases, including predictive maintenance, subsurface analytics, process automation, and workflow optimization, and also end-user productivity. Each one of those use cases obviously needs specific objectives that the business is looking at in each area of the value chain.
In predictive maintenance, for example, we monitor and we analyze equipment health, we prevent failures, and we allow for preventive maintenance and reduced downtime. The AI helps us understand when machinery needs to be maintained in order to prevent failure instead of just waiting for it to happen. In subsurface analysis, we’re exploring AI to develop better models of hydrocarbon reservoirs. We are exploring AI to forecast geomechanical models and to capture and understand data from fiber optic sensing. Fiber optic sensing is a capability that has proven very valuable to us, and AI is helping us make sense of the wealth of information that comes out of the hole, as we like to say. Of course, we don’t do this alone. We partner with many third-party organizations, with vendors, and with subject matter experts inside of Chevron to move the projects forward.
There are several other areas beyond industrial AI that we are looking at. AI really is a transformation catalyst, and so areas like finance and law and procurement and HR, we’re also doing testing in those corporate areas. I can tell you that I’ve been part of projects in procurement, in HR. When I was in HR we ran a pretty amazing effort in partnership with a third-party company, and what they do is they seek to transform the way we understand talent, and the way they do that is they are trying to provide data-driven frameworks to make talent decisions.
And so they redefine talent by framing data in the form of skills, and as they do this, they help de-bias processes that are usually or can be usually prone to unconscious biases and perspectives. It really is fascinating to think of your talent-based skills and to start decoupling them from what we know since the industrial era began, which is people fit in jobs. Now the question is more the other way around. How can jobs adapt to people’s skills? And then in procurement, AI is basically helping us open the aperture to a wider array of vendors in an automated fashion that makes us better partners. It’s more cost-effective. It’s really helpful.
Before I close here, you did reference frameworks. The framework of industrial AI versus what I call productivity AI, and the understanding of the use cases, all of this sits on top of our responsible AI frameworks. We have set up a central enterprise AI organization, and it has done a great job developing key areas of responsible AI as well as training and adoption frameworks. This includes how to use AI, how not to use AI, and what data we can share with the different GPTs that are available to us.
We are now members of organizations like the Responsible AI Institute, which fosters the safe and trustworthy use of AI. Our own responsible AI framework involves four pillars. The first is principles: this is how we make sure we continue to stay aligned with the values that drive this company, which we call The Chevron Way. The second is assessment: making sure that we evaluate these solutions in proportion to impact and risk. As I mentioned, when you're talking about industrial processes, people's lives are at stake, so we take a very close look at what we are putting out there and how we ensure that it keeps our people safe. The third is education: training our people to augment their capabilities and reinforcing responsible principles. And the last of the four is governance: oversight and accountability through control structures that we are putting in place.
Megan: Fantastic. Thank you so much for those really fascinating specific examples as well. It’s great to hear about. And digital transformation, which you did touch on briefly, has become critical of course to enable business growth and innovation. I wonder what has Chevron’s digital transformation looked like and how has the shift affected overall operations and the way employees engage with technology as well?
Luis: Yeah, yeah. That's a really good question. The term digital transformation is interpreted in many different ways. For me, it really is about leveraging technology to drive business results and business transformation. We usually point to emerging technology as the catalyst for transformation. I think that is okay, but I also think you can drive digital transformation with technology that's not necessarily emerging but is being optimized. So under this umbrella, we include everything from our Citizen Developer Program to complex industry partnerships that help us maximize the value of data.
The Citizen Developer Program has been very successful in bridging the gap between our technical software engineering and development practices and the people who are out there doing the work, getting them familiar with, and demystifying, the way we build solutions.
I do believe that transformation is as much about technology as it is about people. So, to go back to the responsible AI framework, we are actively training and upskilling the workforce. We created a program called Learn Digital that helps employees embrace these technologies. I mentioned the concept of demystifying: it's really important that people don't fall into the trap of being scared off by the potential of the technology, or by the fact that it is new. We help them, and we give them the tools, to bridge the change management gap so they can use these technologies and get the most out of them.
At a high level, our transformation has followed the cyclical nature that pretty much any transformation does. We have identified the data foundations that we need to have. We have understood the impact of the processes that we are trying to digitize. We organize that information, then we streamline and automate processes, we learn, and now machines learn and then we do it all over again. And so this cyclical mindset, this iterative mindset has really taken hold in our culture and it has made us a little bit better at accepting the technologies that are driving the change.
Megan: And to look at one of those technologies in a bit more detail, cloud computing has revolutionized infrastructure across industries. But there's also a pendulum shift now toward hybrid and edge computing models. How is Chevron balancing cloud, hybrid, and edge strategies for optimal performance?
Luis: Yeah, that’s a great question and I think you could argue that was the genesis of the digital transformation effort. It’s been a journey for us and it’s a journey that I think we’re not the only ones that may have started it as a cost savings and storage play, but then we got to this ever-increasing need for multiple things like scaling compute power to support large language models and maximize how we run complex models. There’s an increasing need to store vast amounts of data for training and inference models while we improve data management and, while we predict future needs.
There’s a need for the opportunity to eliminate hardware constraints. One of the promises of cloud was that you would be able to ramp up and down depending on your compute needs as projects demanded. And that hasn’t stopped, that has only increased. And then there’s a need to be able to do this at a global level. For a company like ours that is distributed across the globe, we want to do this everywhere while actively managing those resources without the weight of the infrastructure that we used to carry on our books. Cloud has really helped us change the way we think about the digital assets that we have.
It’s important also that it has created this symbiotic need to grow between AI and the cloud. So you don’t have the AI without the cloud, but now you don’t have the cloud without AI. In reality, we work on balancing the benefits of cloud and hybrid and edge computing, and we keep operational efficiency as our North Star. We have key partnerships in cloud, that’s something that I want to make sure I talk about. Microsoft is probably the most strategic of our partnerships because they’ve helped us set our foundation for cloud. But we also think of the convenience of hybrid through the lens of leveraging a convenient, scalable public cloud and a very secure private cloud that helps us meet our operational and safety needs.
Edge computing fills the need for low latency and real-time data processing, which are critical constraints for decision-making in most of the locations where we operate. You can think of an offshore rig, a refinery, an oil rig out in the field, and maybe even not-so-remote areas like here in our corporate offices. Putting that compute power close to the data source is critical. So we work and partner with vendors to enable lighter compute that we can place at the edge and, as I mentioned with the foundation earlier, faster communication protocols at the edge that also solve the need for speed.
But it is important to remember that you don't want to think about edge computing and cloud as separate things. Cloud supports edge by providing centralized management and advanced analytics, among other things. You can train models in the cloud and then deploy them to edge devices, keeping real-time priorities in mind. I would say that edge computing also supports our cybersecurity strategy, because it allows us to control and secure sensitive environments and information while we embed machine learning and AI capabilities out there.
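As a rough illustration of that train-in-the-cloud, run-at-the-edge pattern, here is a hedged sketch using scikit-learn and joblib as stand-ins; the actual stack any operator uses would differ.

```python
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Cloud side: train an anomaly detector on historical sensor data.
historical = np.random.default_rng(1).normal(0.0, 1.0, size=(10_000, 4))
model = IsolationForest(random_state=0).fit(historical)
joblib.dump(model, "detector.joblib")  # artifact shipped to edge devices

# --- Edge side: load the shipped artifact and score live readings
# locally, with low latency and no round trip to the cloud.
edge_model = joblib.load("detector.joblib")
live_reading = np.array([[0.1, -0.4, 3.9, 0.2]])  # one sensor snapshot
print(edge_model.predict(live_reading))  # -1 = anomaly, 1 = normal
```

The design point is the split: heavy training happens centrally, while inference runs next to the data source, which is exactly the low-latency need described above.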
I have mentioned use cases like predictive maintenance and safety; those are good examples of areas where we want to make sure our cybersecurity strategy is front and center. When I was talking about my experience, I talked about the center and the edge. Our strategy to balance that pendulum relies on flexibility and on effective asset management. Making sure that our cloud reflects those strategic realities gives us a good footing to achieve our corporate objectives.
Megan: As you say, safety is a top priority. How do technologies like the Internet of Things and AI help enhance safety protocols, especially in the context of emissions tracking and leak detection?
Luis: Yeah, thank you for the question. Safety is the most important thing that we think and talk about here at Chevron. There is nothing more important than ensuring that our people are safe and healthy, so I would break safety down into two areas. Before I jump to emissions tracking and leak detection, I just want to make a quick point on personal safety and how we leverage IoT and AI to that end.
We use sensing capabilities that help us keep workers out of harm's way, things like computer vision to identify and alert people who are entering restricted safety areas. We also use computer vision, for example, to check PPE requirements, meaning personal protective equipment requirements. If there are areas that require a certain type of clothing, a certain type of identification, or a hard hat, we are using technologies that can help us make sure people have that before they go into a particular area.
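As an illustration of what such a PPE check might look like in code, here is a hedged sketch. It assumes an object detector already fine-tuned on PPE classes; the weights file, camera frame, and class names are hypothetical, not Chevron artifacts.

```python
from ultralytics import YOLO  # assumes the ultralytics package is installed

REQUIRED = {"hard_hat", "safety_vest"}  # hypothetical class names

model = YOLO("ppe_detector.pt")     # hypothetical fine-tuned weights
results = model("gate_camera.jpg")  # frame from an access-point camera

# Collect the class names the detector saw in the frame.
detected = {model.names[int(box.cls)] for box in results[0].boxes}

missing = REQUIRED - detected
if missing:
    print("Alert: missing PPE:", ", ".join(sorted(missing)))
```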
We’re also using wearables. Wearables help us in one of the use cases is they help us track exhaustion and dehydration in locations where that creates inherent risk, and so locations that are very hot, whether it’s because of the weather or because they are enclosed, we can use wearables that tell us how fast the person’s getting dehydrated, what are the levels of liquid or sodium that they need to make sure that they’re safe or if they need to take a break. We have those capabilities now.
Going back to emissions tracking and leak detection, I think it’s actually the combination of IoT and AI that can transform how we prevent and react to those. In this case, we also deploy sensing capabilities. We use things like computer vision, like infrared capabilities, and we use others that deliver data to the AI models, which then alert and enable rapid response.
The way I would explain how we use IoT and AI for safety, whether it's personnel safety or emissions tracking and leak detection, is to think about sensors as an extension of the human ability to sense. In some cases, you could argue, they are superhuman abilities. Take sight: normally you would have had supervisors or people out there looking at the field and identifying issues. Now we can use computer vision with traditional RGB cameras, with infrared, and with multi-angle views to identify patterns, and have AI tell us what's going on.
If you keep thinking about the human senses, that's sight, but you can also use sound, through ultrasonic sensors or microphones. You can use touch, through vibration and heat recognition. And more recently, something we are just starting to test, you can use smell. There are companies that are starting to digitize smell. Pretty exciting, also a little bit crazy. But it is happening. These are all tools that any human would use to identify risk, and now we can deploy them as extensions of our human abilities. This way we can react much faster and better to anomalies.
A specific example is methane. We have a simple goal with methane: we want to keep it in the pipe. Once it's out, it's really hard, almost impossible, to take it back. Over the last six to seven years, we have reduced our methane intensity by over 60%, and we're leveraging technology to achieve that. We have deployed a methane detection program and trialed 10 to 15 advanced methane detection technologies.
A technology that I have been looking at recently is called Aquanta Vision. This is a company supported by an incubator program we have called Chevron Studio. We did this in partnership with the National Renewable Energy Laboratory, and what they do is they leverage optical gas imaging to detect methane effectively and to allow us to prevent it from escaping the pipe. So that’s just an example of the technologies that we’re leveraging in this space.
Megan: Wow, that’s fascinating stuff. And on emissions as well, Chevron has made significant investments in new energy technologies like hydrogen, carbon capture, and renewables. How do these technologies fit into Chevron’s broader goal of reducing its carbon footprint?
Luis: This is obviously a fascinating space for us, one that is ever-changing. It is honestly not my area of expertise. But what I can say is we truly believe we can achieve high returns and lower carbon, and that's something that we communicate broadly. A few years ago, I believe it was 2021, we established our Chevron New Energies company, which actively explores lower-carbon alternatives including hydrogen, renewables, carbon capture, and offsets.
My area, the digital area, and the convergence between digital technologies and the technical sciences will enable the techno-commercial viability of those business lines. Carbon capture is something that we've done for a long time; we have decades of experience in carbon capture technologies across the world.
One of our larger projects, the Gorgon Project in Australia, has captured, I think, something between 5 and 10 million tons of CO2 emissions in the past few years, so we have good expertise in that space. But we also actively partner in carbon capture. We have joined carbon capture hubs here in Houston, for example, and we're investing in companies like Carbon Clean, Carbon Engineering, and Svante. I'm familiar with these names because the corporate VC team is close to me. These companies provide technologies for direct air capture and solutions for hard-to-abate industries. We want to keep an eye on these emerging capabilities and make use of them to continuously lower our carbon footprint.
There are two areas here that I would like to talk about. Hydrogen first. This is another area that we're familiar with. Our plan is to build on our existing assets and capabilities to deliver a large-scale hydrogen business. I think we've been doing retail hydrogen since 2005, and we also have several partnerships there. In renewables, we are creating a range of fuels for different transportation types: bio-based diesel, renewable natural gas, and sustainable aviation fuel. These are all areas of importance to us. They're emerging business lines that are young in comparison to the rest of our company. We've been a company for 140-plus years, and this started in 2021, so you can imagine how steep that learning curve is.
I mentioned how we leverage our corporate venture capital team to learn and to keep an eye out for emerging trends and technologies that we want to learn about. They leverage two things: a core fund, which is focused on innovation for our core business, and a separate future energy fund that explores emerging areas. Not only do they invest in places like hydrogen, carbon capture, and renewables, but they may also invest in other areas like wind, geothermal, and nuclear. So we constantly keep our eyes open for these emerging technologies.
Megan: I see. And I wonder if you could share a bit more actually about Chevron’s role in driving sustainable business innovation. I’m thinking of initiatives like converting used cooking oil into biodiesel, for example. I wonder how those contribute to that overall goal of creating a circular economy.
Luis: Yeah, this is fascinating, and I was so happy to learn a little bit more about it this year when I had the chance to visit our offices in Iowa. I'll get into that in a second. But I'm happy to talk about this, again with the caveat that it's not my area of expertise.
Megan: Of course.
Luis: In the case of biodiesel, we acquired a company called REG in 2022. They were one of the founders of the renewable fuels industry, and they honestly do incredible work to create energy through a process, I forget the name of the process to be honest. But at the most basic level, what they do is prepare feedstocks that come from different types of biomass, you mentioned cooking oils, and there are also soybeans and animal fats. Through various chemical reactions, they convert components of the feedstock into biodiesel and glycerin. After that, they separate the unreacted methanol, which is recovered and recycled back into the process, and the biodiesel goes through final processing to make sure it meets the standards necessary to be commercialized.
What REG has done is it has boosted our knowledge as a broader organization on how to do this better. They continuously look for bio-feedstocks that can help us deliver new types of energy. I had mentioned bio-based diesel. One of the areas that we’re very focused on right now is sustainable aviation fuel. I find that fascinating. The reason why this is working and the reason why this is exciting is because they brought this great expertise and capability into Chevron. And in turn, as a larger organization, we’re able to leverage our manufacturing and distribution capabilities to continue to provide that value to our customers.
I mentioned that I learned a little bit more about this this year. Earlier in the year, I was lucky enough to visit our REG offices in Ames, Iowa. That's where they're located. And I will tell you that the passion and commitment those people have for the work that they do was incredibly energizing. These are folks who have helped us believe, really, that our promise of lower carbon is attainable.
Megan: Wow. Sounds like there's some fascinating work going on. Which brings me to my final question, which is looking ahead: what emerging technologies are you most excited about, and how do you see them impacting both Chevron's core business and the energy sector as a whole?
Luis: Yeah, that’s a great question. I have no doubt that the energy business is changing and will continue to change only faster, both our core business as well as the future energy, or the way it’s going to look in the future. Honestly, in my line of work, I come across exciting technology every day. The obvious answers are AI and industrial AI. These are things that are already changing the way we live without a doubt. You can see it in people’s productivity. You can see it in how we optimize and transform workflows. AI is changing everything. I am actually very, very interested in IoT, in the Internet of Things, and robotics, the ability to protect humans in high-risk environments, like I mentioned, is critical to us, the opportunity to prevent high-risk events and predict when they’re likely to happen.
This is pretty massive, both for our productivity objectives and for our lower carbon objectives. If we can predict when we are at risk of particular events, we could avoid them altogether. As I mentioned before, this ubiquitous ability to sense our surroundings is a capability that our industry, and I'm going to say humankind, is only beginning to explore.
There’s another area that I didn’t talk too much about, which I think is coming, and that is quantum computing. Quantum computing promises to change the way we think of compute power and it will unlock our ability to simulate chemistry, to simulate molecular dynamics in ways we have not been able to do before. We’re working really hard in this space. When I say molecular dynamics, think of the way that we produce energy today. It is all about the molecule and understanding the interactions between hydrocarbon molecules and the environment. The ability to do that in multi-variable systems is something that quantum, we believe, can provide an edge on, and so we’re working really hard in this space.
Yeah, there are so many. And having talked about all of them, AI, IoT, robotics, quantum, the most interesting thing to me is the convergence of all of them. If you think about the opportunity to leverage robotics while the machines increasingly control limited processes and understand what they need to do in a preventive and predictive way, there is such incredible potential to transform our lives, to make an impact in the world for the better. We see that potential.
My job is to keep an eye on those developments and to make sure that we're managing these things responsibly, that in the things we test and trial and the things we deploy, we maintain a strict sense of responsibility so we keep everyone safe: our employees, our customers, and our stakeholders from a broader perspective.
Megan: Absolutely. Such an important point to finish on. And unfortunately, that is all the time we have for today, but what a fascinating conversation. Thank you so much for joining us on the Business Lab, Luis.
Luis: Great to talk to you.
Megan: Thank you so much. That was Luis Niño, digital manager of technology ventures and innovation at Chevron, whom I spoke with today from Brighton, England.
That’s it for this episode of Business Lab. I’m Megan Tatum, I’m your host and a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts, and if you enjoyed this episode, we really hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thank you so much for listening.
Meta has released a new AI model that can translate speech from 101 different languages. It represents a step toward real-time, simultaneous interpretation, where words are translated as soon as they come out of someone’s mouth.
Typically, translation models for speech use a multistep approach. First they translate speech into text. Then they translate that text into text in another language. Finally, that translated text is turned into speech in the new language. This method can be inefficient, and at each step, errors and mistranslations can creep in. But Meta’s new model, called SeamlessM4T, enables more direct translation from speech in one language to speech in another. The model is described in a paper published today in Nature.
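For a sense of what the direct approach looks like in practice, here is a minimal sketch using the publicly released SeamlessM4T checkpoint on Hugging Face. The model name and call pattern follow the transformers library's documentation; treat the details as an approximation rather than Meta's internal pipeline.

```python
import torchaudio
from transformers import AutoProcessor, SeamlessM4Tv2Model

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

# Load an English speech clip and resample to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("english_clip.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(audios=waveform.squeeze().numpy(),
                   sampling_rate=16_000, return_tensors="pt")

# A single generate call goes straight from source speech to Spanish
# speech, instead of chaining speech recognition, text translation,
# and text-to-speech as separate steps.
spanish_speech = model.generate(**inputs, tgt_lang="spa")[0]
```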
Seamless can translate text with 23% more accuracy than the top existing models. And although another model, Google’s AudioPaLM, can technically translate more languages—113 of them, versus 101 for Seamless—it can translate them only into English. SeamlessM4T can translate into 36 other languages.
The key is a process called parallel data mining, which finds instances in crawled web data where the sound in a video or audio clip matches a subtitle in another language. The model learned to associate those sounds in one language with the matching pieces of text in another, which opened up a whole new trove of translation examples for the model.
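Conceptually, the mining step works like this hedged toy sketch: embed audio clips and candidate subtitles into a shared vector space, then keep the pairs whose similarity clears a threshold. The embedding functions are placeholders; Meta's pipeline uses its own multilingual encoders and much larger-scale search.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mine_pairs(audio_vecs, text_vecs, threshold=0.8):
    """Return (audio_idx, text_idx, score) for pairs that look aligned.

    audio_vecs, text_vecs: vectors from a shared embedding space,
    e.g. one vector per audio clip and one per subtitle line.
    """
    pairs = []
    for i, a in enumerate(audio_vecs):
        j = max(range(len(text_vecs)), key=lambda k: cosine(a, text_vecs[k]))
        score = cosine(a, text_vecs[j])
        if score >= threshold:
            pairs.append((i, j, score))
    return pairs
```

Pairs that survive the threshold become new training examples linking speech in one language to text in another.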
“Meta has done a great job having a breadth of different things they support, like text-to-speech, speech-to-text, even automatic speech recognition,” says Chetan Jaiswal, a professor of computer science at Quinnipiac University, who was not involved in the research. “The mere number of languages they are supporting is a tremendous achievement.”
Human translators are still a vital part of the translation process, the researchers say in the paper, because they can grapple with diverse cultural contexts and make sure the same meaning is conveyed from one language into another. This step is important, says Lynne Bowker, Canada Research Chair in Translation, Technologies and Society at Université Laval in Quebec, who didn’t work on Seamless. “Languages are a reflection of cultures, and cultures have their own ways of knowing things,” she says.
When it comes to applications like medicine or law, machine translations need to be thoroughly checked by a human, she says. If not, misunderstandings can result. For example, when Google Translate was used to translate public health information about the covid-19 vaccine from the Virginia Department of Health in January 2021, it translated “not mandatory” in English into “not necessary” in Spanish, changing the whole meaning of the message.
AI models have many more examples to train on in some languages than in others. This means current speech-to-speech models may be able to translate a language like Greek into English, where there may be many examples, but cannot translate from Swahili to Greek. The team behind Seamless aimed to solve this problem by pre-training the model on millions of hours of spoken audio in different languages. This pre-training allowed it to recognize general patterns in language, making it easier to process less widely spoken languages because it already had some baseline for what spoken language is supposed to sound like.
The system is open-source, which the researchers hope will encourage others to build upon its current capabilities. But some are skeptical of how useful it may be compared with available alternatives. “Google’s translation model is not as open-source as Seamless, but it’s way more responsive and fast, and it doesn’t cost anything as an academic,” says Jaiswal.
The most exciting thing about Meta’s system is that it points to the possibility of instant interpretation across languages in the not-too-distant future—like the Babel fish in Douglas Adams’ cult novel The Hitchhiker’s Guide to the Galaxy. SeamlessM4T is faster than existing models but still not instant. That said, Meta claims to have a newer version of Seamless that’s as fast as human interpreters.
“While having this kind of delayed translation is okay and useful, I think simultaneous translation will be even more useful,” says Kenny Zhu, director of the Arlington Computational Linguistics Lab at the University of Texas at Arlington, who is not affiliated with the new research.