Meta AI Introduces AI-Generated Photos to All Platforms via @sejournal, @martinibuster

Meta just released multiple updates to Meta AI that bring advanced image generation and editing capabilities directly to Facebook, Instagram, and WhatsApp feeds, plus availability in more countries and languages.

New Meta AI Creative Tools

Meta AI is bringing AI-generated and AI-edited imagery that can be created at the moment a user is making a post or sending a message, starting with a new tool called Imagine Me.

Imagine Me is a prompt that transforms an uploaded photo into a new, shareable image. The feature is first rolling out as a beta in the United States.

Meta explains:

“Imagine yourself creates images based on a photo of you and a prompt like ‘Imagine me surfing’ or ‘Imagine me on a beach vacation’ using our new state-of-the-art personalization model. Simply type “Imagine me” in your Meta AI chat to get started, and then you can add a prompt like “Imagine me as royalty” or “Imagine me in a surrealist painting.” From there, you can share the images with friends and family, giving you the perfect response or funny sidebar to entertain your group chat.”

Screenshot of a photograph of a woman that's altered using Meta AI to show her having tea with an ostrich

New Editing Features

Meta products like Facebook, Messenger, WhatsApp, and Instagram now have advanced editing capabilities that allow users to add or remove objects from images or change them in virtually any manner, such as Meta's example of turning a cat in an image into a dog. A new Edit With AI button, arriving in about a month, will unlock even more AI editing power.

The ability to add AI-generated images to Facebook, Instagram, Messenger, and WhatsApp within feeds, posts, stories, comments, and messages is rolling out this week in English and coming later in other languages.

Screenshot of a Facebook user adding an AI generated image into their post

Meta AI In More Countries And Languages

Meta AI is now available in seven additional countries, bringing the total to 22 countries. It is also available in seven more languages.

List of Seven Additional Countries:

  1. Argentina
  2. Cameroon
  3. Chile
  4. Colombia
  5. Ecuador
  6. Mexico
  7. Peru

Meta AI is now also available in the following seven additional languages:

  1. French
  2. German
  3. Hindi
  4. Hindi-Romanized Script
  5. Italian
  6. Portuguese
  7. Spanish

Advanced Math And Coding

Meta is making its most advanced model, Llama 405B, available so users can take advantage of its advanced reasoning abilities, which can answer complex questions and excel at math and coding.

Meta AI writes:

“You can get help on your math homework with step-by-step explanations and feedback, write code faster with debugging support and optimization suggestions, and master complex technical and scientific concepts with expert instruction.”

Read the official announcement:

Meta AI Is Now Multilingual, More Creative and Smarter

Featured Image by Shutterstock/QubixStudio

WordPress Releases 6.6.1 To Fix Fatal Errors In 6.6 via @sejournal, @martinibuster

A week after releasing the troubled version 6.6, WordPress has released another version that fixes seven major issues, including two that caused fatal errors (website crashes), another that caused security plugins to issue false warnings, plus several more that created unwanted UI changes.

Fatal Errors In WordPress 6.6

The issue that got the most attention on social media affected users of certain page builders and themes like Divi. The issue, while relatively minor, dramatically changed the look of websites by adding underlines beneath all links. Some on social media joked that this was a fix, not a bug. While underlining links is generally good user practice, underlines aren't necessary for all links, such as those in top-level navigation.

A post on the WordPress.org support forums was the first noticeable indication on social media that something was wrong with WordPress 6.6:

“Updating to 6.6 caused all links to be immediately underlined on a staging divi themed site.”

They outlined a workaround that seemed to alleviate the issue but were unsure of the root cause.

They then posted:

“But does anyone think this means I still have something wrong with this staging site, or is this a WordPress version update issue, or more likely a divi theme issue I should speak to them about? Also, if anyone is even familiar with expected Rparen error…that I’m just riding with at the moment, that might help. Thanks.”

Divi issued an emergency fix that its users could apply, even though the issue was on the WordPress side, not the Divi side.

WordPress later acknowledged the bug and reported that a fix would be issued in version 6.6.1.

The Other Issues Fixed In 6.6.1

Fatal Error

is_utf8_charset() undefined when called by code in compat.php (causes a fatal error).

A section of code in 6.6 caused a critical issue (fatal error) that prevented websites from functioning normally. It was noticed by users of WP Super Cache, whose developers created a temporary workaround that consisted of completely disabling website caching.

Their note on GitHub stated:

“Disabling the cache removes the error but is far from ideal.”

PHP Fatal Error

“PHP Fatal error: Uncaught Error: Object of class WP_Comment could not be converted to string.”

The problem was in a part of WordPress core that retrieves the name of the person who left a comment on a post. The function expected a number (the comment ID) but sometimes received a more complex piece of data instead (a WP_Comment object), which PHP cannot convert to a string, triggering a fatal error. It's like trying to fit a square peg into a round hole: it doesn't work.
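
The failure pattern is generic: a function that assumes it will receive a simple ID breaks when it's handed a richer object. Here is a minimal, hypothetical Python sketch of the same pattern; the names are illustrative and are not WordPress's actual code.

```python
class Comment:
    """A stand-in for a rich comment object (analogous to WP_Comment)."""
    def __init__(self, comment_id: int, author: str):
        self.comment_id = comment_id
        self.author = author

def get_comment_author(comment_id: int) -> str:
    # Expects a plain integer ID and builds a lookup key from it.
    return f"author-of-comment-{comment_id:d}"

comment = Comment(42, "Alice")

try:
    # Passing the whole object where an ID is expected fails at the
    # formatting step, much like PHP's "could not be converted to string."
    get_comment_author(comment)
except TypeError as err:
    print(f"Fatal-style error: {err}")

# The fix: pass the ID itself (or normalize the argument first).
print(get_comment_author(comment.comment_id))
```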

This issue was discovered by someone who was using the Divi website builder.

The other bugs that were fixed didn't cause websites to crash, but they were inconvenient:

Read the full details of WordPress 6.6.1 maintenance release:

WordPress 6.6.1 Maintenance Release

Featured Image by Shutterstock/HBRH

Google Shares Tips To Improve SEO Through Internal Links via @sejournal, @MattGSouthern

In a new installment of its “SEO Made Easy” video series, Google provides three simple guidelines for utilizing internal linking to improve SEO.

The video, presented by Google’s Martin Splitt, offers valuable insights for improving site structure and user experience.

Strategic internal linking highlights your most valuable content, ensuring users and search engines can find it quickly.

Additionally, internal linking can help search engines understand the relationships between pages, potentially leading to better rankings.

3 Tips For Internal Linking

Splitt emphasized three main points regarding the effective use of internal links:

  1. User Navigation: Internal links guide users through a website, helping them find related content and understand the site’s structure.
  2. Search Engine Crawling: Google’s web crawler, Googlebot, uses internal links to discover new pages and understand the relationships between different pages on a site.
  3. HTML Best Practices: Properly using HTML elements, particularly the <a> tag with an href attribute, is essential for creating effective links.

The Importance Of Meaningful Anchor Text

One of Google’s key recommendations is to use descriptive, meaningful anchor text for links.

Splitt demonstrated how clear anchor text improves user experience by allowing visitors to quickly scan a page and understand where each link will lead them.

He stated:

“Users and Bots alike prefer meaningful anchor text. Here on the left you see what that looks like: each link has meaningful words as anchor text, and you can easily spot what the link will take you to.”

See the examples he’s referring to in the image below:

Screenshot from: YouTube.com/GoogleSearchCentral, July 2024.

Splitt continues:

“On the right you see a page that doesn’t use meaningful anchor text and that isn’t a good user experience especially when you try to quickly scan the page and find the right link to use.”

Balancing Link Quantity

While internal linking is vital, Splitt cautioned against overdoing it.

He advises applying critical judgment when adding links and creating logical connections between related content without overwhelming the user or diluting the page’s focus.

Technical Considerations For Links

The video also touched on the technical aspects of link implementation.

Splitt discouraged using non-standard elements like spans, divs, or buttons to create links, saying if an element behaves like a link, it should be coded as one using the proper HTML structure.
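
To make that advice concrete, here is a rough Python sketch of a page audit that flags vague anchor text and "links" built from non-anchor elements. It assumes the requests and beautifulsoup4 packages are installed, and the URL is a placeholder.

```python
# A rough internal-link audit sketch (assumes `requests` and
# `beautifulsoup4` are installed; the URL is a placeholder).
import requests
from bs4 import BeautifulSoup

VAGUE_ANCHORS = {"click here", "read more", "learn more", "here", "more"}

def audit_links(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Proper links: <a> elements with an href attribute.
    for a in soup.find_all("a", href=True):
        text = a.get_text(strip=True).lower()
        if not text or text in VAGUE_ANCHORS:
            print(f"Vague or empty anchor text -> {a['href']}")

    # Link-like elements built from divs/spans/buttons, which Splitt warns against.
    for el in soup.find_all(["div", "span", "button"], onclick=True):
        print(f"Non-standard 'link' found: <{el.name} onclick=...>")

if __name__ == "__main__":
    audit_links("https://example.com/")
```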

Screenshot from: YouTube.com/GoogleSearchCentral, July 2024.

In Summary

These are the key takeaways from Google’s video on internal linking:

  • Internal linking is a fundamental aspect of SEO and user experience.
  • Focus on creating meaningful, descriptive anchor text for links.
  • Use internal links strategically to guide users and search engines through your site.
  • Balance the number of links to avoid overwhelming users or diluting page focus.
  • Stick to proper HTML structure when implementing links.

See the full video below:


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, July 2024. 

System Builders – How AI Changes The Work Of SEO via @sejournal, @Kevin_Indig

AI is terraforming tech. The content and SEO ecosystem is undergoing a massive structural change.

Human-written content gains value faster for LLM training than for end consumers as the pure profit licensing deals between LLM developers and publishers show.

Publishers struggle to survive from digital subscriptions but get millions that go straight to their bottom line for providing training data.

Content platforms, social networks, SaaS companies and consumer apps coat their products with AI. A few examples:

  • Spotify DJ (AI-generated playlist).
  • AI Overview (AI answers in Google Search).
  • Instagram AI personas (celebrity AI chatbots).
  • Ebay’s magical listing (turn a photo into a listing).
  • Redfin Redesign (try interior designs on real house pictures).
Google searches for "chat gpt" (Image Credit: Kevin Indig)

The quality of machine-generated content (MGC) challenges human-generated content (HGC). I ran an experiment with my Twitter and LinkedIn followers: I asked them to choose which of two articles was written by a human and which by a machine – and they had to explain their answer.

Only a handful of people figured out that AI wrote both pieces. I intentionally framed the question in a leading way to see if people would challenge the setting or believe that one piece was written by a human if told so.

  • Not an isolated experiment: A survey of 1,900 Americans found that 63.5% of people can’t distinguish between AI content and human content.1
  • People seek help: Google search demand for [ai checker] reached 100,000 in May 2024 (Glimpse).
  • Dark side: scammers use MGC to make money, as 77% of AI scam victims lost money.2
Search demand for "AI checker" (Image Credit: Kevin Indig)

The quality level of LLMs pushes SEO work towards automating workflows and learning with AI, while writers will take content from good to great instead of zero to one.


How AI Changes The Work Of SEO (Image Credit: Lyna ™)

System Builders

Clients, podcasters and panel hosts often ask me what skills SEOs need to build for the AI future. For a long time, my answer was to learn, stay open-minded and gain as much practical experience with AI as possible.

Now, my answer is that SEOs should learn how to build AI agents and workflows that automate tasks. AI changes not only the way search works but also the way SEOs work.

AI + No-code Allows SEOs To Automate Workflows

A few examples:

1/ Cannibalization

  • Old world: SEOs download search console data and create pivot tables to spot keyword cannibalization.
  • New world: SEOs build an AI workflow that sends alerts, identifies true keyword cannibalization, makes content suggestions to fix the problem, and monitors the improvement (see the sketch below).
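
A minimal version of the detection step could start from a Search Console CSV export; the file name and column names below are assumptions.

```python
# Sketch of the detection step: given a Search Console export with
# query/page/clicks columns, flag queries where several URLs split the
# clicks. Column names and the file name are assumed.
import pandas as pd

df = pd.read_csv("search_console_export.csv")  # columns: query, page, clicks

pages_per_query = df.groupby("query")["page"].nunique()
candidates = pages_per_query[pages_per_query > 1].index

for query in candidates:
    rows = df[df["query"] == query].sort_values("clicks", ascending=False)
    top_share = rows["clicks"].iloc[0] / max(rows["clicks"].sum(), 1)
    # If no single page clearly dominates, it may be true cannibalization.
    if top_share < 0.8:
        print(f"Possible cannibalization for '{query}':")
        print(rows[["page", "clicks"]].to_string(index=False))
```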

2/ Site Crawling

  • Old world: SEOs crawl websites to find inefficiencies in internal linking, status code errors, duplicate content, etc.
  • New world: SEOs build an AI agent that regularly crawls the site and automatically suggests new internal links that are shipped after human approval, fixes broken canonical tags, and excludes soft 404 errors in the robots.txt. A rough sketch of such a crawl check follows.
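
Here's what a bare-bones crawl check might look like before any AI layer is added; the URL list is a placeholder, and the requests and beautifulsoup4 packages are assumed installed.

```python
# Rough crawl-check sketch: fetch a list of URLs, report non-200 status
# codes and missing or mismatched canonical tags. URLs are placeholders.
import requests
from bs4 import BeautifulSoup

URLS = [
    "https://example.com/",
    "https://example.com/blog/",
]

for url in URLS:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    if resp.status_code != 200:
        print(f"{url} -> status {resp.status_code}")
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    if canonical is None:
        print(f"{url} -> missing canonical tag")
    elif canonical.get("href") != url:
        print(f"{url} -> canonical points to {canonical.get('href')}")
```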

3/ Content Creation

  • Old world: SEOs do keyword research and write content briefs. Writers create the content.
  • New world: SEOs automate keyword research with AI and create hundreds of relevant articles as a foundation for writers to build on (see the sketch below).
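
As a hedged illustration of the brief-generation step, the sketch below calls an LLM through the OpenAI Python client (v1+). The model name is a placeholder, an API key is assumed to be configured, and the output is a starting point for writers rather than finished content.

```python
# Hedged sketch of automating content briefs with an LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_brief(keyword: str) -> str:
    prompt = (
        f"Write a short content brief for the keyword '{keyword}': "
        "target search intent, suggested H2s, and questions to answer."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for keyword in ["keyword cannibalization", "internal linking best practices"]:
    print(draft_brief(keyword))
    print("---")
```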

All of this is already possible today with AI workflow tools like AirOps or Apify, which chain agents and LLMs together to scrape, analyze, transform data or create content.

Moving forward, we’ll spend much more time building automated systems instead of wasting time on point analyses and catalogs of recommendations. The SEO work will be defining logic, setting rules, prompting and coding.

Building workflows with AirOps (Image Credit: Kevin Indig)

You Can Learn (Almost) Anything With AI

I never made the time to really learn Python or R, but with the help of ChatGPT and Gemini in Colab, I can write any script with natural language prompts.

When the script doesn’t work, I can paste a screenshot into ChatGPT and describe the issue to get a solution. AI helps with Regex, Google Sheets/Excel, R, Python, etc. Nothing is off-limits.

Being able to write scripts can solve problems like data analysis, A/B testing, and using APIs. As an SEO, I’m no longer dependent on engineers, data scientists, or writers to perform certain tasks. I can act faster and on my own.
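
For instance, an A/B test significance check is the kind of script an assistant can produce in seconds. This sketch uses a standard two-proportion z-test with scipy; the conversion numbers are made up.

```python
# Two-proportion z-test for an A/B experiment (illustrative numbers).
import math
from scipy.stats import norm

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

z, p = ab_test(conv_a=120, n_a=2400, conv_b=150, n_b=2380)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```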

I’m not the only one to figure this out. People are learning to code, write and many other skills with AI. We can learn to build AI workflows by asking AI to teach us.

Search demand for "coding with AI" is exploding (Image Credit: Kevin Indig)
Search demand for "write with AI" is exploding (Image Credit: Kevin Indig)
Search demand for "learn with AI" is exploding (Image Credit: Kevin Indig)

When you can learn almost anything, the only limit is time.

The Work Of Writers Changes

Contrary to common belief, writers won’t be cut out of this equation; they will play the critical role of editing, directing, and curating.

In any automated process, humans QA the output. Think of car assembly lines. Even as AI content leaps in quality, spot checks reduce the risk of errors. Issues caught during review, such as wrong facts, weird phrasing, or off-brand wording, become critical feedback for fine-tuning models to improve their output.

Instead of legwork like writing drafts, writers will bring AI content from good to great. Under the concept of information gain, writers will spend most of their time making a piece outstanding.

This quality work spans from blog content to programmatic content, where writers will add curated content when searchers want human experience, such as in travel.

A mini guide to Los Angeles: Tripadvisor’s attraction pages feature human-curated sections. (Image Credit: Kevin Indig)

Unfair Advantage

As often happens with new technology, a few first-mover people and companies capture exponential value until the rest catch up. My worry is that a few fast-moving companies will grab massive territory with AI.

And yet, this jump in progress will allow newcomers to challenge incumbents and get a fair chance to compete on the field.

AI might be a bigger game changer for SEOs than for Google. The raw power of AI might help us overcome challenges from AI Overviews and machine learning-driven algorithm updates.

But the biggest win might be that SEOs can finally make something instead of delivering recommendations. The whole value contribution of SEOs changes because my output can drive results faster.

Survey: ChatGPT and AI Content – Can people tell the difference?

Artificial Intelligence Voice Scams on the Rise with 1 in 4 Adults Impacted


Featured Image: Paulo Bobita/Search Engine Journal

Looking Ahead To The Future – SGE, Gemini…And AGI? via @sejournal, @marie_haynes

This extract is from SEO in the Gemini Era by Marie Haynes ©2024 and reproduced with permission from Marie Haynes Consulting Inc.

There are changes on the horizon that all SEOs should be aware of, and they are closely related to AI. Search Generative Experience (SGE), which has since been renamed AI Overviews, and Gemini are already changing how people get their information.

And, if we are to believe Google DeepMind’s co-founder Demis Hassabis, within the next decade Google will achieve its goal of building artificial general intelligence (AGI).

I do believe Demis. And I am excited.

Let’s talk first about the immediate changes to the search landscape that every website owner should be aware of.

This is a general overview as many of these features and what’s important to know about them are changing rapidly. Much of this section will likely be dated by the time you read this book!

Google’s Search Generative Experience/AI Overviews

Google’s CEO, Sundar Pichai, has said that SGE is the future of Search.

SGE is the future of Search (Image from author, July 2024)

SGE started as an experiment that could be opted into at labs.google.com. It is now live in Search in some countries under a new name, AI Overviews. AI Overviews can include several different components, or a combination of them, including:

An AI-generated answer that appears to be stitched together from multiple websites, like a large featured snippet.

Carousels Of Websites

I have been calling these “helpful content carousels” because they generally contain the type of content we have been talking about throughout this book. I have since heard them referred to as “link cards.”

(I like helpful content carousels better!)

Carousels of websites. Image from author, July 2024

These carousels are appearing in regular searches and also in Google Assistant responses. You may see similar carousels peppered throughout search labeled as “Perspectives,” “Forum Discussions,” and other labels.

Similar carousels peppered throughout search, labeled as “Perspectives,” “Forum Discussions,” and other labels. Image from author, July 2024

We will see more evolution of these features over time. Google is learning with each search whether people are finding AI Overviews helpful. They will continually learn how to improve.

Here’s more reading:

Google Gemini (Formerly Bard)

Google’s naming of products is confusing! You may have noticed that throughout this book I refer sometimes to Bard, and sometimes to Gemini. This is because in early 2024, Bard was renamed to Gemini.

Gemini is also the name of the language model behind the system. Gemini essentially is everything that is AI at Google.

Gemini the chatbot is not a search engine. But people will likely use it as one. It is a way to get information, and the more it improves, the more helpful it becomes.

Gemini will become Google Assistant, available at a quick touch or voice command on most phones and able to pop up over top of other apps.

It is continually improving via a process called reinforcement learning. If you used Bard a few times and gave up because you found it unhelpful, I’d encourage you to try out Gemini.

Ideally, it’s worth signing up for the most advanced version of Gemini publicly available, Gemini Advanced. As I write this, Google is offering a two-month trial.

In the short time since Bard upgraded to Gemini I have been thoroughly impressed with its improvement. It feels like it improves daily.

In February of 2024 Google quietly announced an upgrade to Gemini called Gemini 1.5 that gives it an entirely new architecture based on something called a Mixture of Experts model.

This type of model is not new, but DeepMind says that the type of MoE model they use for Gemini is a brand new version of MoE. The changes made to Gemini made it significantly more efficient, accurate and better able to understand the data it trains on.

Gemini 1.5 greatly improves Google’s AI capabilities across the board. And creates a framework for them to continue to improve at a fast rate.

Here are some helpful links to learn more about Gemini:

If you read one thing on Gemini, make it this:

Google’s blog post called “Assistant with Bard: A step toward a more personal assistant.”

I thoroughly believe Gemini is the way of the future. It might not seem like it if you have used it a few times and run into a few hallucinations and made-up answers.

After reading all of the above, hopefully you will see what I see: Gemini is poised to be the future of how we interact with information online.

A lot will likely change in the world as this happens.

How? It’s hard to predict. I think that many people will be affected like I have been. The more I use LLMs, the more I learn. The more I learn, the more knowledge I have. Then I can take those ideas and brainstorm them with Gemini or ChatGPT.

This has led to me being able to understand a lot about how Search works and to develop my skills in almost everything I do in my work at a faster pace. The more I do this, the more I learn how to learn and also to get the most out of the LLM tools that are available to me.

People who are good at their trades will get better with the help of AI. Those who know how to use AI will start to develop significant advantages over those who do not.

Imagine if you were living in 2024 and did not use a phone. You could certainly live, but you would be at a disadvantage compared to those who do use technology.

I believe we may face a dangerous divide in our civilization as this happens. I am beyond excited for those who are at the cutting edge of learning how we can improve the world with AI. But what will happen to those who decide to avoid its use at all costs?

Fortunately, Google’s CEO has said that this transition in how we search will happen over the next decade. We hopefully have some time to adjust.

Business Integration

In its earnings calls, Google has mentioned that one of Gemini’s strengths is business integration. We haven’t seen it yet. But eventually, we should see it get easier and easier for businesses to not only integrate Google’s AI capabilities but also make money from them.

Pay attention to how AI is changing Google Ads as well. I have not written about Ads in this book, but can see all sorts of future opportunities here.

Let me share what I think could happen. Imagine a searcher looking for information on a recent traffic drop. They converse with Gemini, which offers general advice about what to consider and then perhaps recommends some websites to read.

I could see Google offering paid positions that say, “Talk to Marie Haynes’ AI Assistant.” It’s an Ad that then connects the searcher with a chatbot on my site.

This chatbot would be grounded with my recent writings. I would be incentivized to continue to create great, helpful content because this is what will make my chatbot useful.

It’s possible I could charge money for this chatbot. Or perhaps I might choose to make it free, and where appropriate, the chatbot would recommend my resources and services.

In that case, I could see Google inserting my Chatbot right into the search results.

When businesses start to make real money from Google’s AI, we will see some more acceleration!

Notes

[1] Demis Hassabis on Chatbots to AGI | EP 71 Hard Fork Podcast. https://www.youtube.com/watch?v=nwUARJeeplA

[2] Google’s Vision for Search in 2023 and Beyond – Analysis of the 2023 Earnings Call. Marie Haynes. https://www.mariehaynes.com/googles-vision-for-search-in-2023-and-beyond-analysis-of-the-q2-2023-earnings-call/

[3] Gemini Advanced. https://gemini.google.com/advanced


To read the full book, SEJ readers have an exclusive 20% discount for Marie’s book, workbook and course bundle. The discount will be applied automatically by following these links:


More resources:


Featured Image: ArtemisDiana/Shutterstock

Google Cautions: Exponential Content Growth Causes Re-Evaluation via @sejournal, @martinibuster

Google’s John Mueller answered a question about the impact of increasing a website’s size by ten times its original size. Mueller’s answer should give pause to anyone considering making their site dramatically larger, as it will cause Google to see it as a brand new website and trigger a re-evaluation.

Impact Of Making A Site Bigger

One reason for a site migration is merging two websites into one, which can make a site dramatically larger. Another reason for an increase in size is the addition of a massive amount of new products.

This is the question that was asked in the SEO Office Hours podcast:

“What’s the impact of a huge expansion of our product portfolio on SEO Performance, for example going from 10,000 products to 100,000?”

It must be pointed out that the question is about a site growing ten times larger.

This is Mueller’s answer:

“I don’t think you have to look for exotic explanations. If you grow a website significantly, in this case, by a factor of 10, then your website will overall be very different. By definition, the old website would only be 10% of the new website. This means it’s only logical to expect search engines to re-evaluate how they show your website. It’s basically a new website after all.

It’s good to be strategic about changes like this, I wouldn’t look at it as being primarily an SEO problem.”

Re-Evaluate How Google Shows A Website

Mueller said it’s not primarily an SEO problem, but it’s possible most SEOs would disagree, because anything that affects how a search engine shows a site is an SEO problem. Perhaps Mueller meant that it should be treated first as a strategic problem.

Regardless, John Mueller’s answer means that growing a site exponentially in a short amount of time could cause Google to re-evaluate it, because it’s essentially an entirely new website, which might be an undesirable scenario.

Although Mueller didn’t specify how long a re-evaluation can take, he has indicated in the past that it can take months. Maybe things have changed but this is what he said four years ago about how long a sitewide evaluation takes:

“It takes a lot of time for us to understand how a website fits in with regards to the rest of the Internet.

…And that’s something that can easily take, I don’t know, a couple of months, a half a year, sometimes even longer than a half a year, for us to recognize significant changes in the site’s overall quality.”

The implication of a sitewide evaluation triggered by exponential growth in content is that the optimal way to approach content growth may be to do it in phases. It’s something to consider.

Listen to the Google SEO Office Hours podcast at the 4:24 minute mark:

Featured Image by Shutterstock/ShotPrime Studio

Google’s new weather prediction system combines AI with traditional physics

Researchers from Google have built a new weather prediction model that combines machine learning with more conventional techniques, potentially yielding accurate forecasts at a fraction of the current cost. 

The model, called NeuralGCM and described in a paper in Nature today, bridges a divide that’s grown among weather prediction experts in the last several years. 

While new machine-learning techniques that predict weather by learning from years of past data are extremely fast and efficient, they can struggle with long-term predictions. General circulation models, on the other hand, which have dominated weather prediction for the last 50 years, use complex equations to model changes in the atmosphere and give accurate projections, but they are exceedingly slow and expensive to run. Experts are divided on which tool will be most reliable going forward. But the new model from Google instead attempts to combine the two. 

“It’s not sort of physics versus AI. It’s really physics and AI together,” says Stephan Hoyer, an AI researcher at Google Research and a coauthor of the paper. 

The system still uses a conventional model to work out some of the large atmospheric changes required to make a prediction. It then incorporates AI, which tends to do well where those larger models fall flat—typically for predictions on scales smaller than about 25 kilometers, like those dealing with cloud formations or regional microclimates (San Francisco’s fog, for example). “That’s where we inject AI very selectively to correct the errors that accumulate on small scales,” Hoyer says.
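
In rough pseudocode, the hybrid loop looks something like the sketch below. This is only a schematic illustration of the idea described in the article, not NeuralGCM's actual implementation, and all the functions and numbers are placeholders.

```python
# Schematic sketch: a conventional physics step followed by a learned
# small-scale correction. Illustrative only, not NeuralGCM's code.
import numpy as np

def physics_step(state: np.ndarray, dt: float) -> np.ndarray:
    """Stand-in for a general circulation model's dynamical core."""
    return state + dt * (-0.01 * state)  # placeholder large-scale tendencies

def learned_correction(state: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for the neural network correcting small-scale errors."""
    return 0.001 * np.tanh(state @ weights)

def hybrid_step(state, weights, dt=1.0):
    state = physics_step(state, dt)
    return state + learned_correction(state, weights)

rng = np.random.default_rng(0)
state = rng.normal(size=(10, 10))     # toy atmospheric state
weights = rng.normal(size=(10, 10))   # toy learned parameters

for _ in range(5):
    state = hybrid_step(state, weights)
print("state shape after 5 hybrid steps:", state.shape)
```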

The result, the researchers say, is a model that can produce quality predictions faster with less computational power. They say NeuralGCM is as accurate as one- to 15-day forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), which is a partner organization in the research. 

But the real promise of technology like this is not in better weather predictions for your local area, says Aaron Hill, an assistant professor at the School of Meteorology at the University of Oklahoma, who was not involved in this research. Instead, it’s in larger-scale climate events that are prohibitively expensive to model with conventional techniques. The possibilities could range from predicting tropical cyclones with more notice to modeling more complex climate changes that are years away. 

“It’s so computationally intensive to simulate the globe over and over again or for long periods of time,” Hill says. That means the best climate models are hamstrung by the high costs of computing power, which presents a real bottleneck to research. 

AI-based models are indeed more compact. Once trained, typically on 40 years of historical weather data from ECMWF, a machine-learning model like Google’s GraphCast can run on less than 5,500 lines of code, compared with the nearly 377,000 lines required for the model from the National Oceanic and Atmospheric Administration, according to the paper. 

NeuralGCM, according to Hill, seems to make a strong case that AI can be brought in for particular elements of weather modeling to make things faster, while still keeping the strengths of conventional systems.

“We don’t have to throw away all the knowledge that we’ve gained over the last 100 years about how the atmosphere works,” he says. “We can actually integrate that with the power of AI and machine learning as well.”

Hoyer says using the model to predict short-term weather has been useful for validating its predictions, but that the goal is indeed to be able to use it for longer-term modeling, particularly for extreme weather risk. 

NeuralGCM will be open source. While Hoyer says he looks forward to having climate scientists use it in their research, the model may also be of interest to more than just academics. Commodities traders and agricultural planners pay top dollar for high-resolution predictions, and the models used by insurance companies for products like flood or extreme weather insurance are struggling to account for the impact of climate change. 

While many of the AI skeptics in weather forecasting have been won over by recent developments, according to Hill, the fast pace is hard for the research community to keep up with. “It’s gangbusters,” he says—it seems as if a new model is released by Google, Nvidia, or Huawei every two months. That makes it difficult for researchers to actually sort out which of the new tools will be most useful and apply for research grants accordingly. 

“The appetite is there [for AI],” Hill says. “But I think a lot of us still are waiting to see what happens.”

Correction: This story was updated to clarify that Stephan Hoyer is a researcher at Google Research, not Google DeepMind.

AI companies promised to self-regulate one year ago. What’s changed?

One year ago, on July 21, 2023, seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—committed with the White House to a set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.

These included promises to do things like improve the testing and transparency around AI systems, and share information on potential harms and risks. 

On the first anniversary of the voluntary commitments, MIT Technology Review asked the AI companies that signed the commitments for details on their work so far. Their replies show that the tech sector has made some welcome progress, with big caveats.

The voluntary commitments came at a time when generative AI mania was perhaps at its frothiest, with companies racing to launch their own models and make them bigger and better than their competitors’. At the same time, we started to see developments such as fights over copyright and deepfakes. A vocal lobby of influential tech players, such as Geoffrey Hinton, had also raised concerns that AI could pose an existential risk to humanity. Suddenly, everyone was talking about the urgent need to make AI safe, and regulators everywhere were under pressure to do something about it.

Until very recently, AI development has been a Wild West. Traditionally, the US has been loath to regulate its tech giants, instead relying on them to regulate themselves. The voluntary commitments are a good example of that: they were some of the first prescriptive rules for the AI sector in the US, but they remain voluntary and unenforceable. The White House has since issued an executive order, which expands on the commitments and also applies to other tech companies and government departments. 

“One year on, we see some good practices towards their own products, but [they’re] nowhere near where we need them to be in terms of good governance or protection of rights at large,” says Merve Hickok, the president and research director of the Center for AI and Digital Policy, who reviewed the companies’ replies as requested by MIT Technology Review. Many of these companies continue to push unsubstantiated claims about their products, such as saying that they can supersede human intelligence and capabilities, adds Hickok. 

One trend that emerged from the tech companies’ answers is that companies are doing more  to pursue technical fixes such as red-teaming (in which humans probe AI models for flaws) and watermarks for AI-generated content. 

But it’s not clear what the commitments have changed and whether the companies would have implemented these measures anyway, says Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models, who also reviewed the responses for MIT Technology Review.  

One year is a long time in AI. Since the voluntary commitments were signed, Inflection AI founder Mustafa Suleyman has left the company and joined Microsoft to lead the company’s AI efforts. Inflection declined to comment. 

“We’re grateful for the progress leading companies have made toward fulfilling their voluntary commitments in addition to what is required by the executive order,” says Robyn Patterson, a spokesperson for the White House. But, Patterson adds, the president continues to call on Congress to pass bipartisan legislation on AI. 

Without comprehensive federal legislation, the best the US can do right now is to demand that companies follow through on these voluntary commitments, says Brandie Nonnecke, the director of the CITRIS Policy Lab at UC Berkeley. 

But it’s worth bearing in mind that “these are still companies that are essentially writing the exam by which they are evaluated,” says Nonnecke. “So we have to think carefully about whether or not they’re … verifying themselves in a way that is truly rigorous.” 

Here’s our assessment of the progress AI companies have made in the past year.

Commitment 1

The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.

All the companies (excluding Inflection, which chose not to comment) say they conduct red-teaming exercises that get both internal and external testers to probe their models for flaws and risks. OpenAI says it has a separate preparedness team that tests models for cybersecurity, chemical, biological, radiological, and nuclear threats and for situations where a sophisticated AI model can do or persuade a person to do things that might lead to harm. Anthropic and OpenAI also say they conduct these tests with external experts before launching their new models. For example, for the launch of Anthropic’s latest model, Claude 3.5, the company conducted predeployment testing with experts at the UK’s AI Safety Institute. Anthropic has also allowed METR, a research nonprofit, to do an “initial exploration” of Claude 3.5’s capabilities for autonomy. Google says it also conducts internal red-teaming to test the boundaries of its model, Gemini, around election-related content, societal risks, and national security concerns. Microsoft says it has worked with third-party evaluators at NewsGuard, an organization advancing journalistic integrity, to evaluate risks and mitigate the risk of abusive deepfakes in Microsoft’s text-to-image tool. In addition to red-teaming, Meta says, it evaluated its latest model, Llama 3, to understand its performance in a series of risk areas like weapons, cyberattacks, and child exploitation. 

But when it comes to testing, it’s not enough to just report that a company is taking actions, says Bommasani. For example, Amazon and Anthropic said they had worked with the nonprofit Thorn to combat risks to child safety posed by AI. Bommasani would have wanted to see more specifics about how the interventions that companies are implementing actually reduce those risks. 

“It should become clear to us that it’s not just that companies are doing things but those things are having the desired effect,” Bommasani says.  

RESULT: Good. The push for red-teaming and testing for a wide range of risks is a good and important one. However, Hickok would have liked to see independent researchers get broader access to companies’ models. 

Commitment 2

The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

After they signed the commitments, Anthropic, Google, Microsoft, and OpenAI founded the Frontier Model Forum, a nonprofit that aims to facilitate discussions and actions on AI safety and responsibility. Amazon and Meta have also joined.  

Engaging with nonprofits that the AI companies funded themselves may not be in the spirit of the voluntary commitments, says Bommasani. But the Frontier Model Forum could be a way for these companies to cooperate with each other and pass on information about safety, which they normally could not do as competitors, he adds. 

“Even if they’re not going to be transparent to the public, one thing you might want is for them to at least collectively figure out mitigations to actually reduce risk,” says Bommasani. 

All of the seven signatories are also part of the Artificial Intelligence Safety Institute Consortium (AISIC), established by the National Institute of Standards and Technology (NIST), which develops guidelines and standards for AI policy and evaluation of AI performance. It is a large consortium consisting of a mix of public- and private-sector players. Google, Microsoft, and OpenAI also have representatives at the UN’s High-Level Advisory Body on Artificial Intelligence.

Many of the labs also highlighted their research collaborations with academics. For example, Google is part of MLCommons, where it worked with academics on a cross-industry AI Safety Benchmark. Google also says it actively contributes tools and resources, such as computing credit, to projects like the National Science Foundation’s National AI Research Resource pilot, which aims to democratize AI research in the US.

Many of the companies also contributed to guidance by the Partnership on AI, another nonprofit founded by Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, on the deployment of foundation models. 

RESULT: More work is needed. More information sharing is a welcome step as the industry tries to collectively make AI systems safe and trustworthy. However, it’s unclear how much of the effort advertised will actually lead to meaningful changes and how much is window dressing. 

Commitment 3

The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.

Many of the companies have implemented new cybersecurity measures in the past year. For example, Microsoft has launched the Secure Future Initiative to address the growing scale of cyberattacks. The company says its model weights are encrypted to mitigate the potential risk of model theft, and it applies strong identity and access controls when deploying highly capable proprietary models. 

Google too has launched an AI Cyber Defense Initiative. In May OpenAI shared six new measures it is developing to complement its existing cybersecurity practices, such as extending cryptographic protection to AI hardware. It also has a Cybersecurity Grant Program, which gives researchers access to its models to build cyber defenses. 

Amazon mentioned that it has also taken specific measures against attacks specific to generative AI, such as data poisoning and prompt injection, in which someone uses prompts that direct the language model to ignore its previous directions and safety guardrails.

Just a couple of days after signing the commitments, Anthropic published details about its protections, which include common cybersecurity practices such as controlling who has access to the models and sensitive assets such as model weights, and inspecting and controlling the third-party supply chain. The company also works with independent assessors to evaluate whether the controls it has designed meet its cybersecurity needs.

RESULT: Good. All of the companies did say they had taken extra measures to protect their models, although it doesn’t seem there is much consensus on the best way to protect AI models. 

Commitment 4

The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly. 

For this commitment, one of the most popular responses was to implement bug bounty programs, which reward people who find flaws in AI systems. Anthropic, Google, Microsoft, Meta, and OpenAI all have one for AI systems. Anthropic and Amazon also said they have forms on their websites where security researchers can submit vulnerability reports. 

It will likely take us years to figure out how to do third-party auditing well, says Brandie Nonnecke. “It’s not just a technical challenge. It’s a socio-technical challenge. And it just kind of takes years for us to figure out not only the technical standards of AI, but also socio-technical standards, and it’s messy and hard,” she says. 

Nonnecke says she worries that the first companies to implement third-party audits might set poor precedents for how to think about and address the socio-technical risks of AI. For example, audits might define, evaluate, and address some risks but overlook others.

RESULT: More work is needed. Bug bounties are great, but they’re nowhere near comprehensive enough. New laws, such as the EU’s AI Act, will require tech companies to conduct audits, and it would have been great to see tech companies share successful examples of such audits. 

Commitment 5

The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.

Many of the companies have built watermarks for AI-generated content. For example, Google launched SynthID, a watermarking tool for image, audio, text, and video generated by Gemini. Meta has a tool called Stable Signature for images, and AudioSeal for AI-generated speech. Amazon now adds an invisible watermark to all images generated by its Titan Image Generator. OpenAI also uses watermarks in Voice Engine, its custom voice model, and has built an image-detection classifier for images generated by DALL-E 3. Anthropic was the only company that hadn’t built a watermarking tool, because watermarks are mainly used in images, which the company’s Claude model doesn’t support. 
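
For readers unfamiliar with the general idea, the toy sketch below embeds and recovers an invisible watermark using least-significant-bit encoding, a classic textbook technique. It is only an illustration of the concept and bears no relation to how SynthID, Stable Signature, or the other proprietary tools mentioned above actually work.

```python
# Toy least-significant-bit watermark: embed bits into pixel LSBs and
# read them back. Illustrative only; requires numpy.
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the least-significant bit of each pixel."""
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # fake grayscale image
mark = rng.integers(0, 2, size=128, dtype=np.uint8)         # 128-bit watermark

stamped = embed(img, mark)
assert np.array_equal(extract(stamped, 128), mark)
print("watermark recovered; max pixel change:",
      int(np.max(np.abs(stamped.astype(int) - img.astype(int)))))
```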

All the companies excluding Inflection, Anthropic, and Meta are also part of the Coalition for Content Provenance and Authenticity (C2PA), an industry coalition that embeds information about when content was created, and whether it was created or edited by AI, into an image’s metadata. Microsoft and OpenAI automatically attach the C2PA’s provenance metadata to images generated with DALL-E 3 and videos generated with Sora. While Meta is not a member, it announced it is using the C2PA standard to identify AI-generated images on its platforms. 

The six companies that signed the commitments have a “natural preference to more technical approaches to addressing risk,” says Bommasani, “and certainly watermarking in particular has this flavor.”  

“The natural question is: Does [the technical fix] meaningfully make progress and address the underlying social concerns that motivate why we want to know whether content is machine generated or not?” he adds. 

RESULT: Good. This is an encouraging result overall. While watermarking remains experimental and is still unreliable, it’s still good to see research around it and a commitment to the C2PA standard. It’s better than nothing, especially during a busy election year.  

Commitment 6

The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.

The White House’s commitments leave a lot of room for interpretation. For example, companies can technically meet this public reporting commitment with widely varying levels of transparency, as long as they do something in that general direction. 

The most common solutions tech companies offered here were so-called model cards. Each company calls them by a slightly different name, but in essence they act as a kind of product description for AI models. They can address anything from the model’s capabilities and limitations (including how it measures up against benchmarks on fairness and explainability) to veracity, robustness, governance, privacy, and security. Anthropic said it also tests models for potential safety issues that may arise later.

Microsoft has published an annual Responsible AI Transparency Report, which provides insight into how the company builds applications that use generative AI, make decisions, and oversees the deployment of those applications. The company also says it gives clear notice on where and how AI is used within its products.

RESULT: More work is needed. One area of improvement for AI companies would be to increase transparency on their governance structures and on the financial relationships between companies, Hickok says. She would also have liked to see companies be more public about data provenance, model training processes, safety incidents, and energy use. 

Commitment 7

The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them. 

Tech companies have been busy on the safety research front, and they have embedded their findings into products. Amazon has built guardrails for Amazon Bedrock that can detect hallucinations and can apply safety, privacy, and truthfulness protections. Anthropic says it employs a team of researchers dedicated to researching societal risks and privacy. In the past year, the company has pushed out research on deception, jailbreaking, strategies to mitigate discrimination, and emergent capabilities such as models’ ability to tamper with their own code or engage in persuasion. And OpenAI says it has trained its models to avoid producing hateful content and refuse to generate output on hateful or extremist content. It trained its GPT-4V to refuse many requests that require drawing from stereotypes to answer. Google DeepMind has also released research to evaluate dangerous capabilities, and the company has done a study on misuses of generative AI. 

All of them have poured a lot of money into this area of research. For example, Google has invested millions into creating a new AI Safety Fund to promote research in the field through the Frontier Model Forum. Microsoft says it has committed $20 million in compute credits to researching societal risks through the National AI Research Resource and started its own AI model research accelerator program for academics, called the Accelerating Foundation Models Research program. The company has also hired 24 research fellows focusing on AI and society. 

RESULT: Very good. This is an easy commitment to meet, as the signatories are some of the biggest and richest corporate AI research labs in the world. While more research into how to make AI systems safe is a welcome step, critics say that the focus on safety research takes attention and resources from AI research that focuses on more immediate harms, such as discrimination and bias. 

Commitment 8

The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.

Since making this commitment, tech companies have tackled a diverse set of problems. For example, Pfizer used Claude to assess trends in cancer treatment research after gathering relevant data and scientific content, and Gilead, an American biopharmaceutical company, used generative AI from Amazon Web Services to do feasibility evaluations on clinical studies and analyze data sets. 

Google DeepMind has a particularly strong track record in pushing out AI tools that can help scientists. For example, AlphaFold 3 can predict the structure and interactions of all life’s molecules. AlphaGeometry can solve geometry problems at a level comparable with the world’s brightest high school mathematicians. And GraphCast is an AI model that is able to make medium-range weather forecasts. Meanwhile, Microsoft has used satellite imagery and AI to improve responses to wildfires in Maui and map climate-vulnerable populations, which helps researchers expose risks such as food insecurity, forced migration, and disease. 

OpenAI, meanwhile, has announced partnerships and funding for various research projects, such as one looking at how multimodal AI models can be used safely by educators and by scientists in laboratory settings. It has also offered credits to help researchers use its platforms during hackathons on clean energy development.  

RESULT: Very good. Some of the work on using AI to boost scientific discovery or predict weather events is genuinely exciting. AI companies haven’t used AI to prevent cancer yet, but that’s a pretty high bar. 

Overall, there have been some positive changes in the way AI has been built, such as red-teaming practices, watermarks, and new ways for industry to share best practices. However, these are only a couple of neat technical solutions to the messy socio-technical problem that is AI harm, and a lot more work is needed. One year on, it is also odd to see the commitments talk about a very particular type of AI safety that focuses on hypothetical risks, such as bioweapons, and completely fail to mention consumer protection, nonconsensual deepfakes, data and copyright, and the environmental footprint of AI models. These seem like weird omissions today. 

How to Audit Content for Better SEO

Google’s frequent algorithm updates produce nonstop fluctuations in organic traffic and rankings. Content audits are essential to identify lost rankings, revive declining traffic, and find new topics.

Here’s how I audit content for better search engine optimization.

Evaluate Lost Traffic

Traffic declines from organic search can occur for multiple reasons.

  • Over time, new articles push current ones deep into the archives. Updating articles with substantially new info and a new date will likely help regain traffic.
  • Google’s core updates often focus on searchers’ intent. Many queries imply multiple intents. For example, a search for “backyard barbeque” could suggest a need for information (how to cook barbeque) or purchase options (where to buy a grill). This type of loss is not easy to fix, likely requiring a new landing page or article.
  • Rankings of competitors’ pages surpassed yours. Shifting from position 1 to 2 can result in huge traffic losses. That, too, is not easy to fix. Closely analyze your competitors’ pages and backlinks to identify what may have led to their ranking increase.
  • Changes by Google in search result pages, such as adding sections and lowering the visibility of organic results. There’s no fix.

Regardless of the cause, the first step is to know which pages account for the loss. Search Console is an easy and free tool for diagnosing organic traffic drops.

  1. Go to the “Performance” section.
  2. Click “Search results” > “Date” > “Compare.”
  3. Select “Compare last 3 months to previous period” or “Compare last 3 months year over year,” depending on your audit frequency.
  4. Click the “Pages” tab and sort by “Clicks difference.”
  5. Click any URL in the first column.
  6. Click “Queries” and sort again by “Clicks difference” to see the search terms causing the biggest traffic drops.

Now search for those queries (keywords) in Google to evaluate what may have caused the decrease. (A scripted version of this comparison is sketched after the screenshot below.)

Screenshot of Search Console's Performance section

Search Console is an easy and free tool for diagnosing organic traffic drops. Click image to enlarge.
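
If you prefer to pull the same comparison programmatically, here is a rough sketch using the Search Console API via google-api-python-client with a service account. The site URL, key file, and date ranges are placeholders, and the service account must be added as a user of the property in Search Console.

```python
# Rough sketch: pull clicks per page for two date ranges from the Search
# Console API and rank the biggest losses. Placeholders throughout.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE = "https://www.example.com/"

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

def clicks_by_page(start_date: str, end_date: str) -> dict:
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["page"],
        "rowLimit": 1000,
    }
    response = service.searchanalytics().query(siteUrl=SITE, body=body).execute()
    return {row["keys"][0]: row["clicks"] for row in response.get("rows", [])}

current = clicks_by_page("2024-04-01", "2024-06-30")
previous = clicks_by_page("2024-01-01", "2024-03-31")

# Sort pages by click difference, biggest losses first.
pages = sorted(
    current.keys() | previous.keys(),
    key=lambda page: current.get(page, 0) - previous.get(page, 0),
)
for page in pages[:20]:
    print(page, current.get(page, 0) - previous.get(page, 0))
```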

Identify Opportunities

There are two main ways to improve your organic search performance.

Identify and improve page 1 rankings

Even climbing one position on the first page will likely increase traffic substantially. Optimizing the page copy by adding more keywords and creating more internal links to the page will likely help.

To identify these pages, use Search Console’s Performance section again.

  • Click “Search results” > “Average position” on top of the graph.
  • Click the filter option (inverted triangle) and select “Position.”
  • Click “Filter by Position” for “Greater than” 2.

Sort the queries by the number of clicks to see the top (and bottom) performers. Again, a page that climbs even one position will experience a huge traffic increase.

In Search Console, filter “Average Position” for “Greater than” 2. Click image to enlarge.

Identify and improve pages with external backlinks

Pages with many backlinks but few searchable keywords often fail to rank. For those pages, research keywords to help Google understand them.

Semrush’s backlink research tool can identify top-linked pages:

  • Log in to Semrush and click “Backlink Analytics” under “Link Building.”
  • Paste your URL and select “Root domain.”
  • Click the “Indexed pages” tab.

The report is sorted by the number of linked domains for each page on your site. Heavily linked pages are easier to improve, rankings-wise.

Use Search Console for ranking opportunities, or run keyword research tools to find new queries.

Screenshot of Semrush's backlink research tool.

Semrush’s backlink research tool can identify top-linked pages. Click image to enlarge.

Brilliance CEO on Acquiring a Food Manufacturer

Lori McDonald is the founder and CEO of Brilliance Business Solutions, a prominent B2B ecommerce development firm. As of February 2024, she’s also the owner of a specialty food manufacturer called Norsland Lefse.

Why would a busy web development executive acquire a direct-to-consumer producer? I asked her that question and more in our recent conversation. The entire audio is embedded below. The transcript is edited for clarity and length.

Kerry Murdock: You’re the CEO of a leading ecommerce development company. Why acquire a food manufacturer?

Lori McDonald: We specialize in working with mid-market manufacturers and distributors. We purchased a small manufacturing company to diversify how we make money and to understand our customers better.

It’s been a great learning experience for me and the Brilliance team. We’re just getting started.

The business is called Norsland Lefse. It’s smaller than our target development customers, but it still provides insight and empathy into what our clients experience daily. We closed on the deal in February 2024.

Murdock: Tell us about Norsland.

McDonald: Norsland Lefse is a 40-year-old specialty food manufacturer and distributor in Rushford, Minnesota. Our main product is Lefse, a Norwegian flatbread similar to a potato-based tortilla.

We also make Uffda! Chips, which is a crisp and fried byproduct of Lefse. We also resell other Norwegian and Scandinavian products.

Norsland has about 20 employees. Most are part-time. We’re a little more than $1 million in annual revenue across 270 SKUs. The majority of our sales are on Amazon. We also have a direct-to-consumer ecommerce site, NorslandLefse.com.

The company was small enough that we could afford it, with revenue from ecommerce.

Murdock: Shipping food sounds complicated.

McDonald: It’s been another learning experience! Most of our products are perishable. Lefse is perishable — we don’t add preservatives — as are other items. We store them in freezers and refrigerators. We ship FedEx 2Day with ice packs on most orders. Shipping speed and operational efficiency are critical.

Murdock: You’re an expert in ecommerce platforms and backend systems. What is Norsland’s setup?

McDonald: We don’t have much technology in place, but it’s coming. We manually sync inventory and orders on Amazon. We use XPS Ship to manage shipping; it integrates with Amazon.

We moved our ecommerce site from Wix to BigCommerce, a Brilliance partner. We completed the migration in just three months. It has given us many new capabilities. We also redesigned the site and made it mobile-friendly.

The Brilliance team handled the replatform and design work. I was on the client side, which was fun and educational. I was the customer.

We’re making progress, but we still have a big to-do list.

Murdock: What are the surprises thus far?

McDonald: Right away we did a profitability review of the products on Amazon. We were losing money on some items. I learned how easy it is to lose money on Amazon without realizing it.

We have a lot more orders coming from Amazon than our own site. Growing sales directly on NorslandLefse.com has been a goal from day one. The margins are better, but implementation is challenging. That’s been an eye-opener too.

Our clients at Brilliance could have thousands of SKUs. We have just 270 at Norsland. I now appreciate the complexity of managing margins for many items across different channels.

I’m also more aware of merchants’ daily operational tasks and components. We might have equipment breaking, staff turnover, inventory glitches, supplier price increases — so many things are happening. At Brilliance, working with clients, we perceive our work to be important. But I see now how our clients have much more to worry about.

Murdock: You’ve alluded to plans and goals. Could you address some of them?

McDonald: We’re not looking to expand SKUs. In some ways, less is more. Our two key products are Lefse and Uffda! Chips. We need to ensure we’re marketing and selling those well. We may offer a Lefse subscription so folks can order and have it delivered year-round.

We partnered from the start with Ox Optimal, a digital marketing firm. Brilliance focuses on the technical build and design of a site. Ox Optimal provides expertise in email marketing and Facebook and Google ads.

Murdock: We’re looking forward to following your progress. How can listeners reach you or buy some Lefse?

McDonald: BrillianceWeb.com is our development firm. Norsland Lefse is at NorslandLefse.com. I’m on LinkedIn.