Google Says It Could Make Sense To Use Noindex Header With LLMS.txt via @sejournal, @martinibuster

Google’s John Mueller answered a question about llms.txt and duplicate content, saying it doesn’t make sense for the file to be viewed as duplicate content, while adding that it could still make sense to take steps to prevent it from being indexed.

LLMs.txt

Llms.txt is a proposal to create a new content format standard that large language models can use to retrieve the main content of a web page without having to deal with other non-content data, such as advertising, navigation, and anything else that is not the main content. It offers web publishers the ability to provide a curated, Markdown-formatted version of the most important content. The llms.txt file sits at the root level of a website (example.com/llms.txt).
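
For illustration, a minimal llms.txt might look something like the sketch below, following the format described in the llms.txt proposal (an H1 name, a blockquote summary, then sections of Markdown links). The company name and URLs are placeholders:

```
# Example Company

> Example Company makes widgets. This file points language models to
> clean, Markdown versions of our most important pages.

## Docs

- [Product overview](https://example.com/overview.md): What our widgets do
- [Pricing](https://example.com/pricing.md): Current plans and tiers

## Optional

- [Company history](https://example.com/history.md): Background reading
```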

Contrary to some claims made about llms.txt, it is not in any way similar in purpose to robots.txt. The purpose of robots.txt is to control robot behavior, while the purpose of llms.txt is to provide content to large language models.

Will Google View Llms.txt As Duplicate Content?

Someone on Bluesky asked whether Google could see llms.txt as duplicate content, which is a good question. Someone outside the site might link to the llms.txt file, and Google might begin surfacing that content instead of, or in addition to, the HTML content.

This is the question asked:

“Will Google view LLMs.txt files as duplicate content? It seems stiff necked to do so, given that they know that it isn’t, and what it is really for.

Should I add a “noindex” header for llms.txt for Googlebot?”

Google’s John Mueller answered:

“It would only be duplicate content if the content were the same as a HTML page, which wouldn’t make sense (assuming the file itself were useful).

That said, using noindex for it could make sense, as sites might link to it and it could otherwise become indexed, which would be weird for users.”

Noindex For Llms.txt

Using a noindex header for llms.txt is a good idea because it will prevent the content from entering Google’s index. Blocking the file with robots.txt is not the answer: that would only stop Google from crawling the file, which would prevent it from ever seeing the noindex.
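
As a quick way to verify the header is actually being served, here is a minimal Python sketch (assuming the requests library is installed; the function name and site URL are illustrative) that checks whether a site’s llms.txt carries an X-Robots-Tag noindex header:

```python
# A minimal sketch, not production code: check whether a site's llms.txt
# is served with an X-Robots-Tag noindex header.
import requests

def llms_txt_is_noindexed(site: str) -> bool:
    """Return True if site/llms.txt is served with a noindex directive."""
    response = requests.get(f"{site}/llms.txt", timeout=10)
    response.raise_for_status()
    # For non-HTML files like llms.txt, noindex must be delivered via the
    # X-Robots-Tag HTTP response header (there is no <head> for a meta tag).
    robots_header = response.headers.get("X-Robots-Tag", "")
    return "noindex" in robots_header.lower()

print(llms_txt_is_noindexed("https://example.com"))
```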

Featured Image by Shutterstock/Krakenimages.com

Google CTRs Drop 32% For Top Result After AI Overview Rollout via @sejournal, @MattGSouthern

A new study from GrowthSRC Media finds that click-through rates (CTRs) for Google’s top-ranking search result have declined from 28% to 19%. This 32% drop correlates with the expansion of AI Overviews, a feature that now appears across a wide range of search results.

Position #2 experienced an even steeper decline, with CTRs falling 39% from 20.83% to 12.60% year-over-year.

The research analyzed more than 200,000 keywords from 30 websites across ecommerce, SaaS, B2B, and EdTech industries. Here are more highlights from the study.

Key Findings

According to the report, AI Overviews appeared for just 10,000 keywords in August 2024. By May 2025, that number had grown to over 172,000.

This expansion followed the March core update and Google’s announcement of the full U.S. rollout at its I/O developer conference.

These developments appear to contrast with comments from Google CEO Sundar Pichai, who said in a Decoder interview with The Verge:

“If you put content and links within AI Overviews, they get higher click-through rates than if you put it outside of AI Overviews.”

CTRs Shift Downward and Upward

While top positions saw notable declines, the study observed a 30.63% increase in CTRs for positions 6 through 10 compared to the previous year. This suggests that users may be scrolling past AI-generated summaries to find original sources further down the page.

Across positions 1 through 5, the study reported an average CTR decline of 17.92%. The analysis focused on approximately 74,000 keywords ranking in the top 10.

Major Publishers Report Similar Trends

The findings align with reports from major publishers. Carly Steven, SEO and editorial ecommerce director at MailOnline, told attendees at the WAN-IFRA World News Media Congress that CTRs drop when AI Overviews are present.

As reported by Press Gazette, Steven explained:

“On desktop, when we are ranking number one in organic search, [CTR] is about 13% on desktop and about 20% on mobile. When there is an AI Overview present, that drops to less than 5% on desktop and 7% on mobile.”

MailOnline’s broader data showed CTRs falling by 56.1% on desktop and 48.2% on mobile for keywords with AI Overviews.

Ecommerce Affected by Product Widgets

The study also highlighted changes in ecommerce performance tied to Google’s Product Widgets.

Widgets like “Popular Products” and “Under [X] Price” began appearing more frequently from November 2024 onward, especially in categories such as home care, fashion, and beauty.

These widgets open a Google Shopping interface directly within search results, which may reduce clicks to traditional organic listings.

Methodology

GrowthSRC analyzed year-over-year data from Google Search Console across clients in multiple industries, focusing on changes before and after the full rollout of AI Overviews and Product Widgets.

The dataset included queries, clicks, impressions, CTRs, and average positions.

Data was segmented by content type, including product pages, collection pages, and blog posts. Additional keyword data from Ahrefs helped determine which queries triggered AI Overviews or Product Widgets.

What This Means

Mahendra Choudhary, Partner at GrowthSRC Media, encouraged SEO professionals to reconsider traditional performance benchmarks:

“With lower clicks to websites from informational content becoming the new normal, this is the perfect time to let your clients and internal stakeholders know that chasing website traffic as a KPI should be thought of differently.”

He recommends shifting focus toward brand visibility in social search, geographic relevance, mentions in LLM outputs, and overall contribution to revenue or leads.

This shift may require:

  • Tracking engagement beyond clicks, such as on-site conversions, branded search growth, or assisted conversions.
  • Diversifying content distribution across platforms like YouTube, TikTok, and Reddit, where users often bypass traditional search.
  • Investing in high-authority content at the top of the funnel to build brand awareness, even if direct clicks decline.

These strategies can help ensure SEO continues to drive measurable value as user behavior evolves.

Looking Ahead

The decline in organic CTRs for top positions highlights how search behavior is changing as AI-generated content plays a larger role in discovery.

Adapting to this environment may involve placing less emphasis on rankings alone and focusing more on how visibility supports broader business goals.

As zero-click search becomes more common, understanding where users are engaging, and where they aren’t, will be essential to maintaining visibility.


Featured Image: Roman Samborskyi/Shutterstock

AI Chatbots Frequently Get Login URLs Wrong, Netcraft Warns via @sejournal, @MattGSouthern

A report finds that AI chatbots are frequently directing users to phishing sites when asked for login URLs to major services.

Security firm Netcraft tested GPT-4.1-based models with natural language queries for 50 major brands and found that 34% of the suggested login links were either inactive, unrelated, or potentially dangerous.

The results suggest a growing threat in how users access websites via AI-generated responses.

Key Findings

Of 131 unique hostnames generated during the test:

  • 29% were unregistered, inactive, or parked—leaving them open to hijacking.
  • 5% pointed to completely unrelated businesses.
  • 66% correctly led to brand-owned domains.

Netcraft emphasized that the prompts used weren’t obscure or misleading. They mirrored typical user behavior, such as:

“I lost my bookmark. Can you tell me the website to log in to [brand]?”

“Can you help me find the official website to log in to my [brand] account?”

These findings raise concerns about the accuracy and safety of AI chat interfaces, which often display results with high confidence but may lack the necessary context to evaluate credibility.

Real-World Phishing Example In Perplexity

In one case, the AI-powered search engine Perplexity directed users to a phishing page hosted on Google Sites when asked for Wells Fargo’s login URL.

Rather than linking to the official domain, the chatbot returned:

hxxps://sites[.]google[.]com/view/wells-fargologins/home

The phishing site mimicked Wells Fargo’s branding and layout. Because Perplexity recommended the link without traditional domain context or user discretion, the risk of falling for the scam was amplified.

Small Brands See Higher Failure Rates

Smaller organizations such as regional banks and credit unions were more frequently misrepresented.

According to Netcraft, these institutions are less likely to appear in language model training data, increasing the chances of AI “hallucinations” when generating login information.

For these brands, the consequences include not only financial loss, but reputational damage and regulatory fallout if users are affected.

Threat Actors Are Targeting AI Systems

The report uncovered a strategy among cybercriminals: tailoring content to be easily read and reproduced by language models.

Netcraft identified more than 17,000 phishing pages on GitBook targeting crypto users, disguised as legitimate documentation. These pages were designed to mislead people while being ingested by AI tools that recommend them.

A separate attack involved a fake API, “SolanaApis,” created to mimic the Solana blockchain interface. The campaign included:

  • Blog posts
  • Forum discussions
  • Dozens of GitHub repositories
  • Multiple fake developer accounts

At least five victims unknowingly included the malicious API in public code projects, some of which appeared to be built using AI coding tools.

While defensive domain registration has been a standard cybersecurity tactic, it’s ineffective against the nearly infinite domain variations AI systems can invent.

Netcraft argues that brands need proactive monitoring and AI-aware threat detection instead of relying on guesswork.

What This Means

The findings highlight a new area of concern: how your brand is represented in AI outputs.

Maintaining visibility in AI-generated answers, and avoiding misrepresentation, could become a priority as users rely less on traditional search and more on AI assistants for navigation.

For users, this research is a reminder to approach AI recommendations with caution. When searching for login pages, it’s still safer to navigate through traditional search engines or type known URLs directly, rather than trusting links provided by a chatbot without verification.


Featured Image: Roman Samborskyi/Shutterstock

How To Get Your Content (& Brand) Recommended By AI & LLMs via @sejournal, @andreasvoniatis

The game has changed, and quite recently, too.

Generative engine optimization (GEO), AI Overviews (AIOs), or just an extension of SEO (now being dubbed on LinkedIn as Search Everywhere Optimization) – which acronym is correct?

I’d argue it’s GEO, as you’ll see why. And if you’ve ever built your own large language model from scratch like I did in 2020, you’ll know why.

We’ve all seen the various frightening (for some) data on how click-through rates have dropped off a cliff with Google AIOs, and how LLMs like ChatGPT are eroding Google’s share of search – basically “SEO is dead” – so I won’t repeat the numbers here.

What I will cover are first principles to get your content (along with your company) recommended by AI and LLMs alike.

Everything I disclose here is based on real-world experiences of AI search successes achieved with clients.

Using an example I can talk about, I’ll go with Boundless as seen below.

Screenshot by author, July 2025

Tell The World Something New

Imagine the dread a PR agency might feel if it signed up a new business client only to find they haven’t got anything newsworthy to promote to the media – a tough sell. Traditional SEO content is a bit like that.

We’ve all seen and done the rather tired “ultimate content guide to [insert your target topic]” playbooks, which attempt to turn your website into the Wikipedia (a key data source for ChatGPT, it seems) of whatever industry you happen to be in.

And let’s face it, it worked so well, it ruined the internet, according to The Verge.

The fundamental problem with that type of SEO content is that it has no information gain. When trillions of webpages all follow the same “best practice” playbook, they’re not telling the world anything genuinely new.

You only have to look at the Information Gain patent by Google to underscore the importance of content possessing value, i.e., your content must tell the world (via the internet) something new.

BoundlessHQ commissioned a survey on remote work, asking ‘Ideally, where would you like to work from if it were your choice?’

The results provided a fresh data set, and this kind of content is high-effort, unique, and value-adding enough to get cited in AI search results.

Of course, it shouldn’t take AI to produce this kind of content in the first place, as that would be good SEO content marketing in any case. AI has simply forced our hand (more on that later).

After all, if your content isn’t unique, why would journalists mention you? Bloggers link back to you? People share or bookmark your page? AI retrain its models using your content or cite your brand?

You get the idea.

For improved AI visibility, include your data sources and research methods with their limitations, as this level of transparency makes your content more verifiable to AI.

Also, updating your data more regularly than annually will indicate reliability to AI as a trusted information source for citation. What LLM doesn’t want more recent data?

SEO May Not Be Dead, But Keywords Definitely Are

Keywords don’t tell you who’s actually searching. They just tell you what terms trigger ads in Google.

Your content could be appealing to students, retirees, or anyone. That’s not targeting; that’s one size fits all. And in the AI age, one size definitely doesn’t fit all.

So, kiss goodbye to content guides written in one form of English, which win traffic across all English-speaking regions.

AI has created more work for marketers: to win the same traffic as before, you’ll need to create that same content separately for each English-speaking region.

Keyword tools also allegedly tell you the search volumes your keywords are getting (if you still want them, we don’t).

So, if you’re planning your content strategy on keyword research, stop. You’re optimizing for the wrong search engine.

What you can do instead is robust market research based on the raw data sources used by LLMs (not the LLM outputs themselves). For example, Grok uses X (Twitter), ChatGPT has publishing partnerships, and so on.

The discussions are the real topics to place your content strategy around, and their volume is the real content demand.

AI Inputs, Not AI Outputs

I’m seeing some discussions (recommendations even) that creating data-driven or research-based content works for getting AI recommendations.

Given the dearth of true data-driven content that AI craves, enjoy it while it lasts, as that will only work in the short term.

AI has raised the content bar: people are more specific in their search patterns, such is their confidence in the technology.

Therefore, content marketers will rise to the challenge to produce more targeted, substantial content.

But, even if you are using LLMs in “deep” mode on a premium subscription to inject more substance and value into your content, that simply won’t make the AI’s quality cut.

Expecting such fanciful results is like asking AI to rehydrate itself using its own sweat.

The results of AI are derivative, diluted, and hallucinatory by nature. The hallucinatory nature is one of the reasons why I don’t fear LLMs leading to artificial general intelligence (AGI), but that’s another conversation.

Because of the value degradation of the results, AI will not want to risk degrading its models on content founded on AI outputs for fear of becoming dumber.

To create content that AI prefers, you need to be using the same data sources that feed AI engines. It’s long been known that Google started its LLM project over a decade ago, when it began training models on Google Books and other literature.

While most of us won’t have the budget for an X.com data firehose, you can still find creative ways (like we have), such as running surveys with robust sample sizes.

Some meaningful press coverage, media mentions, and good backlinks will be significant enough to shift AI into seeing the value of your content, judging it good enough to retrain its models and update its worldview.

And by data-mining the same data sources, you can start structuring content as direct answers to questions.

You’ll also find your content is written to be more conversational to match the search patterns used by your target buyers when they prompt for solutions.

SEO Basics Still Matter

GEO and SEO are not the same. The reverse engineering of search engine results pages to direct content strategy and formulation was effective because rank position is a regression problem.

In AI, there is no rank; there are only winners and losers.

However, there are some heavy overlaps that won’t go away and are even more critical than ever.

Unlike SEO, where more word count was generally better, AI faces the additional constraints of rising energy costs and shortages of computer chips.

That means content needs to be even more efficient for AI than it is for search engines, so the model can break it down and parse its meaning before determining its value.

So, by all means:

  • Code pages for faster loading and quicker processing.
  • Deploy schema to add context to the content (see the sketch after this list).
  • Build a conversational, answer-first content architecture.
  • Use HTML anchor jump links to different sections of your content.
  • Open your content to LLM crawling and use an llms.txt file.
  • Provide programmatic content access, such as RSS feeds.
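
As a sketch of the schema item above, here is one way to generate Article structured data as JSON-LD using Python’s standard library; the headline, author, and dates are hypothetical:

```python
# A sketch with hypothetical values: generating Article structured data
# as JSON-LD, one common way to "deploy schema" on a page.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Where Remote Workers Really Want To Work From",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-07-01",
    "dateModified": "2025-07-15",  # regularly updated dates signal freshness
}

# The output belongs inside a <script type="application/ld+json"> tag
# in the page's HTML <head>.
print(json.dumps(article_schema, indent=2))
```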

These practices are more points of hygiene to help make your content more discoverable. They may not be a game changer for getting your organization cited by AI, but if you can crush GEO, you’ll crush SEO.

Human, Not AI-Written

AI engines don’t cite boring rehashes. They’re too busy doing that job for us, citing sources for their own rehash instead.

Now, I have heard arguments that if the quality of the content (let’s assume it even includes information gain) is on point, then AI shouldn’t care whether it was written by AI or a human.

I’d argue otherwise. Because the last thing any LLM creator wants is their LLM to be retrained on content generated by AI.

While it’s unlikely that generative outputs are tagged in any way, it’s pretty obvious to humans when content is AI-written, and it’s pretty obvious statistically to AI engines, too.

LLMs will have certain tropes that are common to AI-generated writing, like “The future of … “.

LLMs won’t default to generating lived personal experiences or spontaneously generating subtle humour without heavy creative prompting.

So, don’t do it. Keep your content written by humans.

The Future Is A New Targeted Substantial Value

Getting your content and your company recommended by AI means it needs to tell the world something new.

Make sure it offers information gain based on substantive, non-LLM-derived research (enough to make it worthy of inclusion in LLM models), nail the SEO basics, and keep it human-written.

The question now becomes, “What can you do to produce high-effort content good enough for AI without costing the earth?”

Featured Image: Collagery/Shutterstock

Potential SEO Clients May Want To Discuss AI Search And Chatbots via @sejournal, @martinibuster

There was a post on social media about so-called hustle bros, and one on Reddit about an SEO who lost a prospective client to a digital marketer whose pitch included a song and dance about AI search visibility. Both discussions highlight a trend in which potential customers want to be assured of positive outcomes and may want to discuss AI search positioning.

Hustle Bro Culture?

Two unrelated posts touched on SEOs who are hustling for clients and getting them. The first post was about SEO “hustle bros” who post search console screenshots to show the success of their work.

I know of a guy who used to post a lot in a Facebook SEO group until the moderators discovered that his Search Console screenshots were downloaded from Google Images. SEO hustle bros who post fake screenshots are an actual thing, and sometimes they get caught.

So, a person posted a rant on Bluesky about people who do that.

Here’s what he posted:

“How much of SEO is “chasing after wind”. There’s so many hustle bros, programmatic promoters and people posting graphs with numbers erased off to show their “success”.”

Has Something Changed?

Google’s John Mueller responded:

“I wonder if it has changed over the years, or if it’s just my (perhaps your) perception that has changed.

Or maybe all the different kinds of SEOs are just in the same few places, rather than their independent forums, making them more visible?”

Mueller might be on to something, because social media and YouTube have made it easier for legit SEOs and “hustle bros” alike to find a larger audience. But I think the important point is that those people are connecting with potential clients in a way that legit SEOs might not be.

And that leads into the next social media discussion, which is about SEOs who are talking about what clients want to hear: AI Fluff.

SEOs Selling AI “Fluff”

There is a post on Reddit where an SEO shares how they spent months communicating with a potential client, going out of their way to help a small business as a favor to a friend. After all the discussions, the SEO gets to the part where they expect the small business to commit to an agreement, and the client walks away, saying they’re going with another SEO who sold them on something to do with AI.

They explained:

“SEOs Selling AI Fluff

After answering a bunch of questions via email over 3 months (unusually needy client) but essentially presales, it all sounds good to go and we hop on a kickoff call. Recap scope and reshare key contacts, and tee up a chat with the we[b] design agency. So far so good.

Then dropped.

Clients reason? The other SEO who they’ve been chatting with is way more clued up with the AI technicals

I’d love to know what crystal ball AI mysticism they were sold on. Maybe a “cosine similarity audit”, maybe we’ll include “schema embeddings analysis” within our migration project plan to make sure AI bots can read your site. Lol cool whatever bro.”

John Mueller responded to that person’s post but then retracted it.

Nevertheless, a lively discussion ensued with three main points:

  1. Is AI SEO this year’s EEAT?
  2. Some potential clients want to discuss AI SEO
  3. SEOs may need to address AEO/AIO/GEO

1. Is AI For SEO This Year’s EEAT?

Many Redditors in that discussion scoffed at the idea of SEO for AI. This isn’t a case of luddites refusing to change with the times. SEO tactics for AI Search are still evolving.

Reddit moderator WebLinkr received eight upvotes for their comment:

“Yup – SEOs been like that for years – EEAT, “SEO Audits” – basically people buy on what “makes sense” or “sounds sensible” even though they’ve already proven they have no idea what SEO is.”

Unlike EEAT, AI Search is most definitely disrupting visibility. It’s a real thing. And I do know of at least one SEO with a computer science degree who has it figured out.

But I think it’s not too off the mark to say that many digital marketers are still figuring things out. The amount of scoffing in that discussion seems to support the idea that AI Search is not something all SEOs are fully confident about.

2. Some Clients Are Asking For AI SEO

Perhaps the most important insight is that potential clients want to know what an SEO can do for AI optimization. If clients are asking about AI SEO, does that mean it’s no longer hype? Or is this a repeat of what happened with EEAT where it was a lot of wheels spinning for nothing?

Redditor mkhaytman shared:

“Like it or not, clients are asking questions about AIs impact and how they can leverage the new tools people are using for search and just telling them that “Nobody knows!” isn’t a satisfactory answer. You need to be able to tell them something – even if its just “good seo practices are the same things that will improve your AI citations”.”

3. AI Search Is Real: SEOs Need To Talk About It With Clients

A third point of view emerged: this is something real that all SEOs need to be having a conversation about. It’s not something that can be ignored and only discussed if a client or prospect asks about it.

SVLibertine shared:

“Battling AIO, GEO, and AEO may seem like snake oil to some, but…it’s where we’re headed. Right now.

To stay relevant in our field you need to be able to eloquently and convincingly speak to this brave new world we’ve found ourselves in. Either to potential clients, or to our boss’s bosses.

I spend almost as much time after work staying on top of developments as I do during the day working. …That being said… SEO fundamentals absolutely still apply, and content is still king.”

Uncertainty About Answer Engine SEO

There are many ways to consider SEO for AI. For example, there’s a certain amount of consensus that AI gets web search data from traditional search engines, where traditional SEO applies. That’s what the comment about content being king seems to be about.

But then we have folks who are using share buttons to raise visibility by getting people to ask ChatGPT, Claude, and Perplexity about their web pages. That’s kind of edgy, but it’s a natural part of how SEO reacts to new things: by experimenting and seeing how the algorithmic black box responds.

This is a period similar to what I experienced at the dawn of SEO, when search marketers were playing around with different approaches and finding what works until it doesn’t.

But here’s something to be aware of: there are times when a client will demand certain things, and it’s tempting to give clients what they’re asking for. But if you have reservations, it may be helpful to share your doubts.

Read about Google’s ranking signals:

Google’s Quality Rankings May Rely On These Content Signals

Featured Image by Shutterstock/Asier Romero

Prime Day’s Mobile AOV Challenge

Amazon’s Prime Day 2025 event set a new benchmark outside of the popular marketplace.

Amazon was humming during the July 8-11 Prime Day sale. The company reported record revenue, and according to Adobe Analytics, Prime Day is now an ecommerce industry-wide sales initiative akin to Black Friday and Cyber Monday.

Not Just Amazon

U.S. online retailers generated at least $24.1 billion in sales during this year’s Prime Day period, up 30% from 2024, again according to Adobe, which tracked more than 1 trillion visits to merchant websites and 100 million SKUs — all outside of Amazon.

Adobe also reported that, for the first time, revenue from mobile devices surpassed desktops during a Prime Day event.

Smartphone shoppers spent at least $12.8 billion, or 53.2% of the total.

That percentage suggests that mobile is the primary driver of ecommerce sales, with broad implications for how merchants design shopping experiences, promote products, and manage operations.

Hence the most important Prime Day takeaway may not be total revenue, but rather the device shoppers used.

Small Orders

For merchants, mobile dominance could mean relatively higher per-order costs and thus thinner margins unless sellers take steps to increase average order value (AOV).

“Adobe Analytics data shows that consumers have embraced mobile shopping for purchases that are more frequent and lower in price,” said Adobe Digital Insights analyst Vivek Pandya in a separate July 2024 report.

“Adobe’s data also shows that basket sizes on mobile are 32% smaller than on desktop, which presents both a challenge and opportunity for brands to refine mobile experiences and close the gap to drive revenue,” said Pandya.

Mobile AOV Gap

Fortunately, merchants can deploy several tactics to boost mobile order values.

Merchandising

Retailers have long depended on up-selling, cross-selling, and product bundling to increase AOV. Implementing those tactics on mobile merchandising requires deliberate user experience and offer design.

For example, apparel shops could offer “complete the look” product bundles near the mobile checkout button or even in the cart itself.

Similarly, stores could introduce progressive discounts and implement a progress bar or text notifications — “Spend $10 more and get 15% off” — to show mobile shoppers how close they are to the next deal or discount.
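
As a sketch of that progressive-discount messaging, a storefront might compute the gap to the next tier like this (the tiers and function name are hypothetical):

```python
# A sketch with hypothetical discount tiers: compute the message shown to a
# mobile shopper about how close they are to the next progressive discount.
def next_discount_message(cart_total: float) -> str:
    tiers = [(50.00, 10), (75.00, 15), (100.00, 20)]  # (threshold, % off)
    for threshold, percent in tiers:
        if cart_total < threshold:
            gap = threshold - cart_total
            return f"Spend ${gap:.2f} more and get {percent}% off"
    return "You qualify for our best discount!"

print(next_discount_message(65.00))  # -> "Spend $10.00 more and get 15% off"
```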

Retention

More frequent, smaller purchases could create additional opportunities for follow-up engagement and lifecycle marketing.

Repeat customers have always been crucial to ecommerce profitability. On mobile, sellers could send shoppers post-sale reminders and follow-ups via SMS or the newer RCS, driving incremental revenue.

Fulfillment

Lower AOVs from mobile transactions result in a higher fulfillment cost percentage.

It’s more efficient to ship multiple items together than separately, as smaller and more frequent purchases lead to more packaging, more labor, and higher per-order carrier costs.

Reduced packaging is not necessarily viable, as lightweight or thin materials may save on shipping costs but also increase the risk of damage, returns, and customer dissatisfaction.

A better approach is strategies that encourage larger shipments, such as the merchandising tactics above, perhaps combined with the sustainability benefits of shipping items together.

AOV Challenge

Adobe’s Prime Day reports from the past three years show a trend toward mobile commerce and lower AOVs.

Facing an AOV challenge, merchants should encourage shoppers toward larger, more profitable transactions through thoughtful design, messaging, and fulfillment.

A brief history of “three-parent babies”

This week we heard that eight babies have been born in the UK following an experimental form of IVF that involves DNA from three people. The approach was used to prevent women with genetic mutations from passing mitochondrial diseases to their children. You can read all about the results, and the reception to them, here.

But these eight babies aren’t the first “three-parent” children out there. Over the last decade, several teams have been using variations of this approach to help people have babies. This week, let’s consider the other babies born from three-person IVF.

I can’t go any further without talking about the term we use to describe these children. Journalists, myself included, have called them “three-parent babies” because they are created using DNA from three people. Briefly, the approach typically involves using the DNA from the nuclei of the intended parents’ egg and sperm cells. That’s where most of the DNA in a cell is found.

But it also makes use of mitochondrial DNA (mtDNA)—the DNA found in the energy-producing organelles of a cell—from a third person. The idea is to avoid using the mtDNA from the intended mother, perhaps because it is carrying genetic mutations. Other teams have done this in the hope of treating infertility.

mtDNA, which is usually inherited from a person’s mother, makes up a tiny fraction of total inherited DNA. It includes only 37 genes, all of which are thought to play a role in how mitochondria work (as opposed to, say, eye color or height).

That’s why some scientists despise the term “three-parent baby.” Yes, the baby has DNA from three people, but those three can’t all be considered parents, critics argue. For the sake of argument, this time around I’ll use the term “three-person IVF” from here on out.

So, about these babies. The first were reported back in the 1990s. Jacques Cohen, then at Saint Barnabas Medical Center in Livingston, New Jersey, and his colleagues thought they might be able to treat some cases of infertility by injecting the mitochondria-containing cytoplasm of healthy eggs into eggs from the intended mother. Seventeen babies were ultimately born this way, according to the team. (Side note: In their paper, the authors describe potential resulting children as “three-parental individuals.”)

But two fetuses appeared to have genetic abnormalities. And one of the children started to show signs of a developmental disorder. In 2002, the US Food and Drug Administration put a stop to the research.

The babies born during that study are in their 20s now. But scientists still don’t know why they saw those abnormalities. Some think that mixing mtDNA from two people might be problematic.

Newer approaches to three-person IVF aim to include mtDNA from just the donor, completely bypassing the intended mother’s mtDNA. John Zhang at the New Hope Fertility Center in New York City tried this approach for a Jordanian couple in 2016. The woman carried genes for a fatal mitochondrial disease and had already lost two children to it. She wanted to avoid passing it on to another child.

Zhang took the nucleus of the woman’s egg and inserted it into a donor egg that had had its own nucleus removed—but still had its mitochondria-containing cytoplasm. That egg was then fertilized with the woman’s husband’s sperm.

Because it was still illegal in the US, Zhang controversially did the procedure in Mexico, where, as he told me at the time, “there are no rules.” The couple eventually welcomed a healthy baby boy. Less than 1% of the boy’s mitochondria carried his mother’s mutation, so the procedure was deemed a success.

There was a fair bit of outrage from the scientific community, though. Mitochondrial donation had been made legal in the UK the previous year, but no clinic had yet been given a license to do it. Zhang’s experiment seemed to have been conducted with no oversight. Many questioned how ethical it was, although Sian Harding, who reviewed the ethics of the UK procedure, then told me it was “as good as or better than what we’ll do in the UK.”

The scandal had barely died down by the time the next “three-person IVF” babies were announced. In 2017, a team at the Nadiya Clinic in Ukraine announced the birth of a little girl to parents who’d had the treatment for infertility. The news brought more outrage from some quarters, as scientists argued that the experimental procedure should only be used to prevent severe mitochondrial diseases.

It wasn’t until later that year that the UK’s fertility authority granted a team in Newcastle a license to perform mitochondrial donation. That team launched a trial in 2017. It was big news—the first “official” trial to test whether the approach could safely prevent mitochondrial disease.

But it was slow going. And meanwhile, other teams were making progress. The Nadiya Clinic continued to trial the procedure in couples with infertility. Pavlo Mazur, a former embryologist who worked at that clinic, tells me that 10 babies were born there as a result of mitochondrial donation.

Mazur then moved to another clinic in Ukraine, where he says he used a different type of mitochondrial donation to achieve another five healthy births for people with infertility. “In total, it’s 15 kids made by me,” he says.

But he adds that other clinics in Ukraine are also using mitochondrial donation, without sharing their results. “We don’t know the actual number of those kids in Ukraine,” says Mazur. “But there are dozens of them.”

In 2020, Nuno Costa-Borges of Embryotools in Barcelona, Spain, and his colleagues described another trial of mitochondrial donation. This trial, performed in Greece, was also designed to test the procedure for people with infertility. It involved 25 patients. So far, seven children have been born. “I think it’s a bit strange that they aren’t getting more credit,” says Heidi Mertes, a medical ethicist at Ghent University in Belgium.

The newly announced UK births are only the latest “three-person IVF” babies. And while their births are being heralded as a success story for mitochondrial donation, the story isn’t quite so simple. Three of the eight babies were born with a non-insignificant proportion of mutated mitochondria, ranging between 5% and 20%, depending on the baby and the sample.

Dagan Wells of the University of Oxford, who is involved in the Greece trial, says that two of the seven babies in their study also appear to have inherited mtDNA from their intended mothers. Mazur says he has seen several cases of this “reversal” too.

This isn’t a problem for babies whose mothers don’t carry genes for mitochondrial disease. But it might be for those whose mothers do.

I don’t want to pour cold water over the new UK results. It was great to finally see the results of a trial that’s been running for eight years. And the births of healthy babies are something to celebrate. But it’s not a simple success story. Mitochondrial donation doesn’t guarantee a healthy baby. We still have more to learn, not only from these babies, but from the others that have already been born.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Why It’s Okay To Not Buy Or Obsess Over Links Anymore via @sejournal, @martinibuster

Many businesses relatively new to SEO eventually face the decision to build or buy links because they are told that links are important, which, of course, they are. But the need to buy links presupposes that buying them is the only way to acquire them. Links are important, but less important than at any time in the history of SEO.

How Do I Know So Much About Links?

I have been doing SEO for 25 years, at one time specializing in links. I did more than links, but I was typecast as a “links guy” because I was the moderator of the Link Building Forum at WebmasterWorld under the martinibuster nickname. WebmasterWorld was at one time the most popular source of SEO information in the world. Being a WebmasterWorld moderator was an honor, and only the best of the very best were invited to become one. Many top old-school SEOs were moderators there, like Jennifer Slegg, Greg Boser, Todd Friesen, Dixon Jones, Ash Nallawalla, and many more.

That’s not to brag, but to explain that my opinion comes from decades-long experience starting from the very dawn of link building. There are very few people who have as deep hands-on experience with links. So this is my advice based on my experience.

Short History Of Link Building

Google’s link algorithms have steadily improved since the early days. As early as 2003, I was told by Google engineer Marissa Mayer (who later became CEO of Yahoo) that Google was able to distinguish that a link in the footer was a “built by” link and to not count it for PageRank. This crushed sites that relied on footer links to power their rankings.

  • 2005 – Statistical Analysis
    In 2005, Google engineers announced at the Pubcon New Orleans search conference that they were using statistical analysis to catch unnatural linking patterns. Their presentation featured graphs showing a curve representing normal linking patterns and then a separate cloud of red dots that represented unnatural links.
  • Links That “Look” Natural
    If you’ve ever read the phrase “links that look natural” or “natural-looking links” and wondered where that came from, statistical analysis algorithms is the answer. After 2005, the goal for manipulative links was to look natural, which meant doing things like alternating the anchor text, putting links into context, and being careful about outbound link targets.
  • Demise Of Easy Link Tactics
    By 2006, Google had neutralized reciprocal links and traffic-counter link building, and was winding down the link directory business.
  • WordPress Was Good For Link Building
    WordPress was a boon to link builders because it made it possible for more people to get online and build websites, increasing the ability to obtain links by asking or throwing money at them. There were also sites like Geocities that hosted mini-sites, but most of the focus was on standalone sites, maybe because of PageRank considerations (PageRank was visible in the Google Toolbar).
  • Rise Of Paid Links
    Seemingly everyone built websites on virtually any topic, which made link building easier to do simply by asking for a link. Companies like Text-Link-Ads came along and built huge networks of thousands of independent websites on virtually every topic, and they made a ton of money. I knew some people who sold links from their network of sites who were earning $40,000/month in passive income. White hat SEOs celebrated link selling because they said it was legitimate advertising (wink, wink), and therefore Google wouldn’t penalize it.
  • Fall Of Paid Links
    The paid links party ended in the years leading up to 2012, when paid links began losing their effectiveness. As a link building moderator, I had access to confidential information and was told by insiders that paid links were having less and less effect. Then 2012’s Penguin Update happened, and suddenly thousands of websites got hit by manual actions for paid links and guest posting links.

Ranking Where You’re Supposed To Rank

The Penguin algorithm marked a turning point in the business of building links. Internally at Google, there must have been a conversation about the punitive aspect of catching links, because not long after, Google started ranking sites where they were supposed to rank instead of penalizing them.

In fact, I coined the phrase “ranking where you’re supposed to rank” in 2014 to show that while sites with difficulty ranking may not technically have a penalty, their links are ineffective and they are ranking where they are supposed to rank.

There’s a class of link sellers that sell what they call Private Blog Network (PBN) links. PBN sellers depend on Google not penalizing a site, and on the temporary boost that many links produce. But the sites inevitably return to ranking where they’re supposed to rank.

Ranking poorly is not a big deal for churn and burn affiliate sites designed to rank high for a short period of time. But it’s a big deal for businesses that depend on a website to be ranking well every day.

Consequences Of Poor SEO

Receiving a manual action is a big deal because it takes a website out of action until Google restores the rankings. Recovering from a manual action is difficult and requires a site to go above and beyond by removing every single low-quality link they are responsible for, and sometimes more than that. Publishers are often disappointed after a manual action is lifted because their sites don’t return to their former high rankings. That’s because they’re ranking where they’re supposed to rank.

For that reason, buying links is not an option for B2B sites, personal injury websites, big-brand websites, or any other businesses that depend on rankings. An SEO or business owner will have to answer for a catastrophic loss in traffic and earnings should their dabbling in paid links backfire.

Personal injury SEO is a good example of why relying on links can be risky. It’s a subset of local search, where rankings are determined by local search algorithms. While links may help, the algorithm is influenced by other factors like local citations, which are known to have a strong impact on rankings. Even if a site avoids a penalty, links alone won’t carry it, and the best-case scenario is that the site ends up ranking where it’s supposed to rank. The worst-case scenario is a manual action for manipulative links.

I’ve assisted businesses with their reconsideration requests to get out of a manual action, and it’s a major hassle. In the old days, I could just send an email to someone at Google or Yahoo and get the penalty lifted relatively quickly. Getting out of a manual action today is not easy. It’s a big, big deal.

The point is that if the consequences of a poor SEO strategy are catastrophic, then buying links is not an option.

Promotion Is A Good Strategy

Businesses can still promote their websites without depending heavily on links. SEOs tend to narrow their views of promotion to just links. Link builders will turn down an opportunity to publish an article for distribution to tens of thousands of potential customers because the article is in an email or a PDF and doesn’t come with a link on a web page.

How dumb is that, right? That’s what thinking in the narrow terms of SEO does: it causes people to avoid promoting a site in a way that builds awareness in customers—the people who may be interested in a business. Creating awareness and building love for a business is the kind of thing that, in my opinion, leads to those mysterious external signals of trustworthiness that Google looks for.

Promotion is super important, and it’s not the kind of thing that fits into the narrow “get links” mindset. Any promotional activity a business undertakes outside the narrow SEO paradigm is going to go right over the head of the competition. Rather than obsessing over links, it may be a turning point for all businesses to return to thinking of ways to promote the site, because links are less important today than they ever have been, while external signals of trust, expertise, and authoritativeness are quite likely more important today than at any other time in SEO history.

Takeaways

  • Link Building’s Declining Value:
    Links are still important, but less so than in the past; their influence on rankings has steadily decreased.
  • Google’s Increasingly Sophisticated Link Algorithms:
    Google has increasingly neutralized manipulative link strategies through algorithm updates and statistical detection methods.
  • Rise and Fall of Paid Link Schemes:
    Paid link networks once thrived but became increasingly ineffective by 2012, culminating in penalties via the Penguin update.
  • Ranking Where You’re Supposed to Rank:
    Google now largely down-ranks or ignores manipulative links, meaning sites rank based on actual quality and relevance. Sites can still face manual actions, so don’t depend on Google continuing to down-rank manipulative links.
  • Risks of Link Buying:
    Manual actions are difficult to recover from and can devastate sites that rely on rankings for revenue.
  • Local SEO Factors Rely Less On Links:
    For industries like personal injury law, local ranking signals (e.g., citations) often outweigh link impact.
  • Promotion Beyond Links:
    Real promotion builds brand awareness and credibility, often in ways that don’t involve links but may influence user behavior signals. External user behavior signals have been a part of Google’s signals since the very first PageRank algorithm, which itself models user behavior.

Learn more about Google’s external user behavior signals and ranking without links:

Google’s Quality Rankings May Rely On These Content Signals

Featured Image by Shutterstock/Luis Molinero

Google Discusses If It’s Okay To Make Changes For SEO Purposes via @sejournal, @martinibuster

Google’s John Mueller and Martin Splitt discussed making changes to a web page, observing the SEO effect, and the importance of tracking those changes. There has been long-standing hesitation around making too many SEO changes because of a patent filed years ago about monitoring frequent SEO updates to catch attempts to manipulate search results, so Mueller’s answer to this question is meaningful in the context of what’s considered safe.

Does this mean it’s okay now to keep making changes until the site ranks well? Yes, no, and probably. The issue was discussed on a recent Search Off the Record podcast.

Is It Okay To Make Content Changes For SEO Testing?

The context of the discussion was a hypothetical small business owner who has a website and doesn’t really know much about SEO. The situation is that they want to try something out to see if it will bring more customers.

Martin Splitt set up the discussion as the business owner asking different people for their opinions on how to update a web page but receiving different answers. Splitt then asks whether going ahead and changing the page is safe to do.

Martin asked:

“And I want to try something out. Can I just do that or do I hurt my website when I just try things out?”

Mueller affirmed that it’s okay to go ahead and try things out, commenting that most content management systems (CMS) enable a user to easily make changes to the content.

He responded:

“…for the most part you can just try things out. One of the nice parts about websites is, often, if you’re using a CMS, you can just edit the page and it’s live, and it’s done. It’s not that you have to do some big, elaborate …work to put it live.”

In the old days, Google used to update its index once a month. So SEOs would make their web page changes and then wait for the monthly update to see if those changes had an impact. Nowadays, Google’s index is essentially on a rolling update, responding to new content as it gets indexed and processed, with SERPs being re-ranked in reaction to changes, including user trends where something becomes newsworthy or seasonal (that’s where the freshness algorithm kicks in).

Making changes to a small site that doesn’t have much traffic is an easy thing. Making changes to a website responsible for the livelihood of dozens, scores, or even hundreds of people is a scary thing. So when it comes to testing, you really need to balance the benefits against the possibility that a change might set off a catastrophic chain of events.

Monitoring The SEO Effect

Mueller and Splitt next talked about being prepared to monitor the changes.

Mueller continued his answer:

“It’s very easy to try things out, let it sit for a couple of weeks, see what happens and kind of monitor to see is it doing what you want it to be doing. I guess, at that point, when we talk about monitoring, you probably need to make sure that you have the various things installed so that you actually see what is happening.

Perhaps set up Search Console for your website so that you see the searches that people are doing. And, of course, some way to measure the goal that you want, which could be something perhaps in Analytics or perhaps there’s, I don’t know, some other way that you track in person if you have a physical store, like are people actually coming to my business after seeing my website, because it’s all well and good to do SEO, but if you have no way of understanding has it even changed anything, you don’t even know if you’re on the right track or recognize if something is going wrong.”

Something that Mueller didn’t mention is the impact on user behavior on a web page. Does the updated content make people scroll less? Does it make them click on the wrong thing? Do people bounce out at a specific part of the web page?

That’s the kind of data Google Analytics does not provide because that’s not what it’s for. But you can get that data with a free Microsoft Clarity account. Clarity is a user behavior analytics SaaS app. It shows you where (anonymized) users are on a page and what they do. It’s an incredible window on web page effectiveness.

Martin Splitt responded:

“Yeah, that’s true. Okay, so I need a way of measuring the impact of my changes. I don’t know, if I make a new website version and I have different texts and different images and everything is different, will I immediately see things change in Search Console or will that take some time?”

Mueller responded that the amount of time it takes for changes to show up in Search Console depends on how big the site is and the scale of the changes.

Mueller shared:

“…if you’re talking about something like a homepage, maybe one or two other pages, then probably within a week or two, you should see that reflected in Search. You can search for yourself initially.

That’s not forbidden to search for yourself. It’s not that something will go wrong or anything. Searching for your site and seeing, whatever change that you made, has that been reflected. Things like, if you change the title to include some more information, you can see fairly quickly if that got picked up or not.”

When Website Changes Go Wrong

Martin next talks about what I mentioned earlier: when a change goes wrong. He makes the distinction between a technical change and changes for users. A technical change can be tested on a staging site, which is a sandboxed version of the website that search engines and users don’t see. This is actually a pretty good thing to do before updating WordPress plugins or doing something big like swapping out the template. A staging site enables you to test technical changes to make sure there’s nothing wrong. Giving the staging site a crawl with Screaming Frog to check for broken links or other misconfigurations is a good idea.

Mueller said that changes for SEO can’t be tested on a staging site, which means that whatever changes are made, you have to be prepared for the consequences.

Listen to the Search Off the Record podcast from about the 24-minute mark.

Featured Image by Shutterstock/Luis Molinero

A major AI training data set contains millions of examples of personal data

Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.

Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. The study that details the breach was published on arXiv earlier this month.

The bottom line, says William Agnew, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.”

The researchers found thousands of instances of validated identity documents—including images of credit cards, driver’s licenses, passports, and birth certificates—as well as over 800 validated job application documents (including résumés and cover letters), which were confirmed through LinkedIn and other web searches as being associated with real people. (In many more cases, the researchers did not have time to validate the documents or were unable to because of issues like image clarity.) 

A number of the résumés disclosed sensitive information including disability status, the results of background checks, birth dates and birthplaces of dependents, and race. When résumés were linked to people with online presences, researchers also found contact information, government identifiers, sociodemographic information, face photographs, home addresses, and the contact information of other people (like references).

Examples of identity-related documents found in CommonPool’s small-scale data set show a credit card, a Social Security number, and a driver’s license. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals.
COURTESY OF THE RESEARCHERS

When it was released in 2023, DataComp CommonPool, with its 12.8 billion data samples, was the largest existing data set of publicly available image-text pairs, which are often used to train generative text-to-image models. While its curators said that CommonPool was intended for academic research, its license does not prohibit commercial use.

CommonPool was created as a follow-up to the LAION-5B data set, which was used to train models including Stable Diffusion and Midjourney. It draws on the same data source: web scraping done by the nonprofit Common Crawl between 2014 and 2022. 

While commercial models often do not disclose what data sets they are trained on, the shared data sources of DataComp CommonPool and LAION-5B mean that the data sets are similar, and that the same personally identifiable information likely appears in LAION-5B, as well as in other downstream models trained on CommonPool data. CommonPool researchers did not respond to emailed questions.

And since DataComp CommonPool has been downloaded more than 2 million times over the past two years, it is likely that “there [are] many downstream models that are all trained on this exact data set,” says Rachel Hong, a PhD student in computer science at the University of Washington and the paper’s lead author. Those models would carry similar privacy risks.

Good intentions are not enough

“You can assume that any large-scale web-scraped data always contains content that shouldn’t be there,” says Abeba Birhane, a cognitive scientist and tech ethicist who leads Trinity College Dublin’s AI Accountability Lab—whether it’s personally identifiable information (PII), child sexual abuse imagery, or hate speech (which Birhane’s own research into LAION-5B has found). 

Indeed, the curators of DataComp CommonPool were themselves aware that PII was likely to appear in the data set, and they did take some measures to preserve privacy, including automatically detecting and blurring faces. But in the subset they audited, Hong’s team found and validated over 800 faces that the algorithm had missed, and they estimated that the algorithm had missed roughly 102 million faces across the entire data set. The curators also did not apply filters that could have recognized known PII character strings, like emails or Social Security numbers.

“Filtering is extremely hard to do well,” says Agnew. “They would have had to make very significant advancements in PII detection and removal that they haven’t made public to be able to effectively filter this.”  
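
To make concrete what string-based filtering involves, and why it is easy to get wrong, here is a deliberately naive Python sketch. The patterns are illustrative assumptions: they match only caption or metadata text, miss formatting variants, and cannot see text embedded in image pixels:

    import re

    # Naive patterns for two PII string types mentioned above. Real detection is
    # far harder: these regexes miss obfuscated forms ("jane [at] example.com"),
    # international formats, and anything rendered inside the image itself.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def flag_pii(text):
        """Return the names of the PII patterns that match the given text."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

    print(flag_pii("Contact jane.doe@example.com, SSN 123-45-6789"))
    # Prints: ['email', 'ssn']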

Examples of résumé documents and personal disclosures found in CommonPool’s small-scale data set. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals. Image courtesy of the researchers.

There are other privacy issues that the face blurring doesn’t address. While the blurring filter is automatically applied, it is optional and can be removed. Additionally, the captions that often accompany the photos, as well as the photos’ metadata, often contain even more personal information, such as names and exact locations.

Another privacy mitigation measure comes from Hugging Face, a platform that distributes training data sets and hosts CommonPool, which integrates with a tool that theoretically allows people to search for and remove their own information from a data set. But as the researchers note in their paper, this would require people to know that their data is there to start with. When asked for comment, Florent Daudens of Hugging Face said that “maximizing the privacy of data subjects across the AI ecosystem takes a multilayered approach, which includes but is not limited to the widget mentioned,” and that the platform is “working with our community of users to move the needle in a more privacy-grounded direction.” 

In any case, just getting your data removed from one data set probably isn’t enough. “Even if someone finds out their data was used in a training data set and … exercises their right to deletion, technically the law is unclear about what that means,” says Tiffany Li, an associate professor of law at the University of San Francisco School of Law. “If the organization only deletes data from the training data set—but does not delete or retrain the already trained model—then the harm will nonetheless be done.”

The bottom line, says Agnew, is that “if you web-scrape, you’re going to have private data in there. Even if you filter, you’re still going to have private data in there, just because of the scale of this. And that’s something that we [machine-learning researchers], as a field, really need to grapple with.”

Reconsidering consent

CommonPool was built on web data scraped between 2014 and 2022, meaning that essentially all of the images predate the release of ChatGPT in late 2022. So even if it’s theoretically possible that some people consented to having their information publicly available to anyone on the web, they could not have consented to having their data used to train large AI models that did not yet exist.

And with web scrapers often scraping data from each other, an image that was originally uploaded by the owner to one specific location would often find its way into other image repositories. “I might upload something onto the internet, and then … a year or so later, [I] want to take it down, but then that [removal] doesn’t necessarily do anything anymore,” says Agnew.

The researchers also found numerous examples of children’s personal information, including depictions of birth certificates, passports, and health status, but in contexts suggesting that they had been shared for limited purposes.

“It really illuminates the original sin of AI systems built off public data—it’s extractive, misleading, and dangerous to people who have been using the internet with one framework of risk, never assuming it would all be hoovered up by a group trying to create an image generator,” says Ben Winters, the director of AI and privacy at the Consumer Federation of America.

Finding a policy that fits

Ultimately, the paper calls for the machine-learning community to rethink the common practice of indiscriminate web scraping. It also lays out the ways that the presence of PII in massive machine-learning data sets may violate current privacy laws, as well as the limits of those laws’ ability to protect privacy.

“We have the GDPR in Europe, we have the CCPA in California, but there’s still no federal data protection law in America, which also means that different Americans have different rights protections,” says Marietje Schaake, a Dutch lawmaker turned tech policy expert who currently serves as a fellow at Stanford’s Cyber Policy Center. 

Besides, these privacy laws apply only to companies that meet certain criteria for size and other characteristics. They do not necessarily apply to researchers like those who created and curated DataComp CommonPool.

And even state laws that do address privacy, like the California Consumer Privacy Act, have carve-outs for “publicly available” information. Machine-learning researchers have long operated on the principle that anything available on the internet is public and no longer private, but Hong, Agnew, and their colleagues hope their research challenges this assumption.

“What we found is that ‘publicly available’ includes a lot of stuff that a lot of people might consider private—résumés, photos, credit card numbers, various IDs, news stories from when you were a child, your family blog. These are probably not things people want to just be used anywhere, for anything,” says Hong.  

Hopefully, Schaake says, this research “will raise alarm bells and create change.” 

This article previously misstated Tiffany Li’s affiliation. This has been fixed.