Kamala Harris should stand with tech workers, not their bosses

Tangled up in the contest to be the next US president, there is another battle brewing: Silicon Valley vs. Silicon Valley. In Donald Trump’s corner are venture capitalists like Marc Andreessen and Peter Thiel, along with executives like Elon Musk. In the other are execs like LinkedIn founder Reid Hoffman and SV Angel investing mogul Ron Conway, who are backing Kamala Harris. Democracy appears to be at stake, and the weapon of choice is cold hard cash. 

Yet as an elected board member of the Alphabet Workers Union, an affiliate of the Communications Workers of America, I urge Americans to take a step back and look critically at the picture in front of us. No matter who wins in November, Silicon Valley’s bosses are positioning themselves for victory. It’s a familiar hedge that goes back decades, but this time is different because over the past four years hundreds of thousands of tech workers have been clawing back power. Tech’s elite have long been the biggest winners in the US economy, and the movement to organize tech workers seeks to hold that elite accountable.

If the next president favors our bosses’ interests over our own, the consequences could be dire for all working people in this country and many others. We know how to fight back against a future Trump administration because we have been there before. What’s less clear is whether and to what extent we can count on a Harris administration to be our ally.

On stage at the Democratic National Convention, Vice President Harris vowed to center the concerns of working people over those of corporate America. If she stays committed to that path in the face of Silicon Valley’s well-funded opposition, she will find dedicated allies in tech workers. 

Massive layoffs and brutal union-busting have become routine across the tech industry in recent years, enacted by executives with ties to both sides of the aisle. And many of the biggest innovations coming out of Silicon Valley over the past decade have been distinctly targeted at cutting labor costs and skirting labor laws. This has triggered a race to the bottom that starts with “gigified” outsourcing and—if the bosses have their way—ends in replacing as much human labor as possible with generative AI. These cost-cutting actions affect not only tech workers’ paychecks but the safety and quality of tech products with massive user bases. 

Some execs are getting more comfortable publicly airing their anti-labor opinions. Recently, in an X Spaces conversation, Trump casually lauded Musk’s mass firing of workers as a way to deal with strikes. Earlier this year, Amazon CEO Andy Jassy violated federal labor law by arguing that workers would actually be “less empowered” if they unionized. On the automation front, executives of Nvidia, Duolingo, Klarna, Cisco, and IBM have recently made clear that they intend to use AI to replace human workers.

But in government and through grassroots campaigning, workers and labor advocates are fighting back. The Justice Department, the Federal Trade Commission, and the National Labor Relations Board under the Biden-Harris administration have been dogged in their pursuit of corporate overreach and labor violations by tech companies and the executives who run them. The DOJ has fought for fair hiring practices: the department secured a $25 million settlement from Apple over hiring discrimination. Lina Khan’s FTC has attempted to ban noncompete agreements—a staple in tech companies’ at-will employment contracts, which have a chilling effect on workers’ ability to seek better pay and benefits.

Moreover, the agency has been consistently taking labor effects into account when evaluating mergers. This consideration moves beyond the tired consumer welfare standard and seeks to make sure that competition favors workers as well as consumers. And the NLRB has targeted outsourcing by more strictly enforcing a “joint employer” rule that makes it harder for companies to use subcontracting as a way to circumvent the minimum wage and other responsibilities.

On the ground, we workers have been simultaneously forming, joining, and strengthening unions to push conversation and action forward. The Campaign to Organize Digital Employees (CODE-CWA) has led the charge for the industry, organizing at companies ranging from ActBlue, the fundraising platform that supports many Democratic candidates, to blue-chip megacorp Microsoft. Our unions have filed petition after petition against employers, and the NLRB has tirelessly worked to enforce the laws our bosses violate, earning wins for labor across the board. In fact, the NLRB has been so successful that some tech companies—including Amazon and SpaceX—are attempting to cut the board off at the knees, claiming that its long-standing role in administering labor relations is unconstitutional.

For those of us accustomed to hard-fought progress and frequent setbacks for labor’s Davids under the thumb of corporate Goliaths, the last few years have been a true bright spot. And we are determined to keep fighting, and keep winning, with or without the support of the next president.

Will either candidate keep pushing forward for labor? The answer is not so clear. Monied tech interests are lining up on both sides to advocate for looser regulation. While pro-Trump venture capitalists Andreessen and Ben Horowitz cited euphemistic “bad government policies” as the number one threat to the tech industry, the Silicon Valley powers that be on Harris’s side haven’t exactly come out swinging for labor. In fact, Hoffman said that the FTC’s Khan is “waging war on American business” and urged Harris to fire her.

It’s not evident yet if Harris shares the views of her billionaire supporters, but she’s certainly chasing their money. A recent Harris campaign fundraiser in San Francisco bagged $13 million from a guest list replete with tech executives. And the vice president is reportedly courting tech bosses more directly, sending aides to meet with crypto leaders and venture capital firms. Her ties to the industry are long-standing and often personal; she’s known to be close with both former Facebook COO Sheryl Sandberg and Laurene Powell Jobs, and her brother-in-law is Uber’s chief legal officer.

While Harris’s team has been having conversations and exploring options, it has not yet announced any economic agenda or approach to regulation, innovation, or labor. It’s savvy to get the money first without making public promises. But Harris should be trying to court our votes, too—not just our bosses’ financial support. In recent memory, workers in the tech industry have demonstrated progressive energy. While campaigning in 2020, Bernie Sanders proudly voiced solidarity with workers against their billionaire bosses. And tech workers turned out for him, donating more to Bernie than to any other presidential candidate during the primaries—close to twice as much as to Elizabeth Warren, the second-favorite candidate for the group. Harris could leverage that kind of power in November if she truly commits to the cause.

Now is the moment for Harris to step up and make a statement in support of workers, promising to continue, if not expand upon, the Biden-Harris approach to Big Tech. Some may remember that when she ran for president in 2020, Senator Harris sided with Uber drivers and against her brother-in-law’s interests during a fight about gig workers’ rights in California. Unions like ours—as well as any American who believes that fair labor practices are essential to a functioning democracy—can continue to apply pressure on Harris and her team to take a strong stand for worker rights and protections. Indeed, the United Auto Workers (UAW) filed federal labor charges against Trump and Musk after those careless comments at the Spaces event, and President Biden walked a picket line with striking auto workers. Voices like theirs and ours—the voices of the hundreds of thousands of workers we represent—will continue to be raised. If we aren’t heard, we will get louder.

The stakes in November are high, and the only truly democratic future is one with fair wages, worker protections, and shared abundance. Tech elites stand in united opposition to such a future and are actively developing the AI tools to undermine it. Tech workers will continue to expand our collective power to fight those elites. The only open question is whether the next administration will be on our side or theirs. 

Stephen McMurtry is a Google software engineer and communications chair of the Alphabet Workers Union-CWA.

A new way to build neural networks could make AI more understandable

A tweak to the way artificial neurons work in neural networks could make AIs easier to decipher.

Artificial neurons—the fundamental building blocks of deep neural networks—have survived almost unchanged for decades. While these networks give modern artificial intelligence its power, they are also inscrutable. 

Existing artificial neurons, used in large language models like GPT-4, work by taking in a large number of inputs, adding them together, and converting the sum into an output using another mathematical operation inside the neuron. Combinations of such neurons make up neural networks, and their combined workings can be difficult to decode.

But the new way to combine neurons works a little differently. Some of the complexity of the existing neurons is both simplified and moved outside the neurons. Inside, the new neurons simply sum up their inputs and produce an output, without the need for the extra hidden operation. Networks of such neurons are called Kolmogorov-Arnold Networks (KANs), after the Russian mathematicians who inspired them.

The simplification, studied in detail by a group led by researchers at MIT, could make it easier to understand why neural networks produce certain outputs, help verify their decisions, and even probe for bias. Preliminary evidence also suggests that as KANs are made bigger, their accuracy increases faster than networks built of traditional neurons.

“It’s interesting work,” says Andrew Wilson, who studies the foundations of machine learning at New York University. “It’s nice that people are trying to fundamentally rethink the design of these [networks].”

The basic elements of KANs were actually proposed in the 1990s, and researchers kept building simple versions of such networks. But the MIT-led team has taken the idea further, showing how to build and train bigger KANs, performing empirical tests on them, and analyzing some KANs to demonstrate how their problem-solving ability could be interpreted by humans. “We revitalized this idea,” said team member Ziming Liu, a PhD student in Max Tegmark’s lab at MIT. “And, hopefully, with the interpretability… we [may] no longer [have to] think neural networks are black boxes.”

While it’s still early days, the team’s work on KANs is attracting attention. GitHub pages have sprung up that show how to use KANs for myriad applications, such as image recognition and solving fluid dynamics problems. 

Finding the formula

The current advance came when Liu and colleagues at MIT, Caltech, and other institutes were trying to understand the inner workings of standard artificial neural networks. 

Today, almost all types of AI, including those used to build large language models and image recognition systems, include sub-networks known as multilayer perceptrons (MLPs). In an MLP, artificial neurons are arranged in dense, interconnected “layers.” Each neuron has within it something called an “activation function”—a mathematical operation that takes in a bunch of inputs and transforms them in some pre-specified manner into an output.

In an MLP, each artificial neuron receives inputs from all the neurons in the previous layer and multiplies each input with a corresponding “weight” (a number signifying the importance of that input). These weighted inputs are added together and fed to the activation function inside the neuron to generate an output, which is then passed on to neurons in the next layer. An MLP learns to distinguish between images of cats and dogs, for example, by choosing the correct values for the weights of the inputs for all the neurons. Crucially, the activation function is fixed and doesn’t change during training.
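The forward pass just described can be sketched in a few lines of Python (the function name and the numbers are illustrative, not drawn from any particular model):

```python
# A minimal sketch of one MLP artificial neuron. Only the weights and bias
# are learned during training; the activation function (ReLU here) is fixed.

def mlp_neuron(inputs, weights, bias):
    """Weighted sum of inputs, passed through a FIXED activation function."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, weighted_sum)  # ReLU: the fixed, non-learnable operation

# One neuron receiving three inputs from the previous layer:
print(mlp_neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.2], bias=0.05))  # ≈ 0.1
```

A full layer is just many such neurons, each with its own weights, all reading the same inputs.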

Once trained, all the neurons of an MLP and their connections taken together essentially act as another function that takes an input (say, tens of thousands of pixels in an image) and produces the desired output (say, 0 for cat and 1 for dog). Understanding what that function looks like, meaning its mathematical form, is an important part of being able to understand why it produces some output. For example, why does it tag someone as creditworthy given inputs about their financial status? But MLPs are black boxes. Reverse-engineering the network is nearly impossible for complex tasks such as image recognition.

And even when Liu and colleagues tried to reverse-engineer an MLP for simpler tasks that involved bespoke “synthetic” data, they struggled. 

“If we cannot even interpret these synthetic datasets from neural networks, then it’s hopeless to deal with real-world data sets,” says Liu. “We found it really hard to try to understand these neural networks. We wanted to change the architecture.”

Mapping the math

The main change was to remove the fixed activation function and introduce a much simpler learnable function to transform each incoming input before it enters the neuron. 

Unlike the activation function in an MLP neuron, which takes in numerous inputs, each simple function outside the KAN neuron takes in one number and spits out another number. Now, during training, instead of learning the individual weights, as happens in an MLP, the KAN just learns how to represent each simple function. In a paper posted this year on the preprint server arXiv, Liu and colleagues showed that these simple functions outside the neurons are much easier to interpret, making it possible to reconstruct the mathematical form of the function being learned by the entire KAN.
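To make the contrast concrete, here is a hedged Python sketch of the KAN idea. Real KANs parameterize each edge function with splines; this toy version uses piecewise-linear interpolation over a fixed grid, and all names and values are illustrative:

```python
# Sketch of a KAN neuron: each incoming connection applies its own learnable
# one-number-in, one-number-out function, and the neuron merely sums the results.

def edge_function(x, grid, values):
    """Piecewise-linear interpolation. The `values` at the grid points are the
    learnable parameters, replacing an MLP's single weight per connection."""
    if x <= grid[0]:
        return values[0]
    for (g0, g1), (v0, v1) in zip(zip(grid, grid[1:]), zip(values, values[1:])):
        if x <= g1:
            t = (x - g0) / (g1 - g0)
            return v0 + t * (v1 - v0)
    return values[-1]

def kan_neuron(inputs, grids, all_values):
    # Inside the neuron there is nothing to learn: it just sums its inputs.
    return sum(edge_function(x, g, v) for x, g, v in zip(inputs, grids, all_values))

grid = [-1.0, 0.0, 1.0]
# Two edges whose learned values roughly encode x**2 and 2*x on the grid:
out = kan_neuron([0.5, 0.5], [grid, grid], [[1.0, 0.0, 1.0], [-2.0, 0.0, 2.0]])
print(out)  # 1.5
```

Because each edge function is a plain one-dimensional curve, it can be plotted and inspected directly, which is the source of the interpretability claim.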

The team, however, has only tested the interpretability of KANs on simple, synthetic data sets, not on real-world problems, such as image recognition, which are more complicated. “[We are] slowly pushing the boundary,” says Liu. “Interpretability can be a very challenging task.”

Liu and colleagues have also shown that KANs get more accurate at their tasks with increasing size faster than MLPs do. The team proved the result theoretically and showed it empirically for science-related tasks (such as learning to approximate functions relevant to physics). “It’s still unclear whether this observation will extend to standard machine learning tasks, but at least for science-related tasks, it seems promising,” Liu says.

Liu acknowledges that KANs come with one important downside: it takes more time and compute power to train a KAN, compared to an MLP.

“This limits the application efficiency of KANs on large-scale data sets and complex tasks,” says Di Zhang, of Xi’an Jiaotong-Liverpool University in Suzhou, China. But he suggests that more efficient algorithms and hardware accelerators could help.

Anil Ananthaswamy is a science journalist and author who writes about physics, computational neuroscience, and machine learning. His new book, WHY MACHINES LEARN: The Elegant Math Behind Modern AI, was published by Dutton (Penguin Random House US) in July.

WordPress Insiders Discuss WordPress Stagnation via @sejournal, @martinibuster

A recent webinar featuring WordPress executives from Automattic and Elementor, along with developers and Joost de Valk, discussed the stagnation in WordPress growth, exploring the causes and potential solutions.

Stagnation Was The Webinar Topic

The webinar, “Is WordPress’ Market share Declining? And What Should Product Businesses Do About it?,” was a frank discussion about what can be done to increase WordPress’ share of the new users choosing a web publishing platform.

Yet something that came up is that there are some areas in which WordPress is doing exceptionally well, so it’s not all doom and gloom. As will be seen later on, the fact that the WordPress core isn’t adopting certain new technologies isn’t necessarily a sign that WordPress is falling behind; it’s actually a feature.

Yet there is stagnation, as mentioned at the 17:07 minute mark:

“…Basically you’re saying it’s not necessarily declining, but it’s not increasing and the energy is lagging. “

The response to the above statement acknowledged that while there are areas of growth like in the education and government sectors, the rest was “up for grabs.”

Joost de Valk spoke directly and unambiguously acknowledged the stagnation at the 18:09 minute mark:

“I agree with Noel. I think it’s stagnant.”

That said, Joost also saw opportunities in ecommerce, given the performance of WooCommerce. WooCommerce, by the way, outperformed WordPress as a whole with a 6.80% year-over-year growth rate, so there’s good reason Joost was optimistic about the ecommerce sector.

A general sense that WordPress was entering a stall, however, was not in dispute, as shown in remarks at the 31:45 minute mark:

“… the WordPress product market share is not decreasing, but it is stagnating…”

Facing Reality Is Productive

Humans have two ways to deal with a problem:

  1. Acknowledge the problem and seek solutions
  2. Pretend it’s not there and proceed as if everything is okay

WordPress is a publishing platform that’s loved around the world. It has created countless jobs and careers, powered online commerce, and helped establish new industries around developing applications that extend WordPress.

Many people have a stake in WordPress’ continued survival, so any talk of WordPress entering a stall and descent, like an airplane that has reached its maximum altitude, is frightening, and some people would prefer to shout it down to make it go away.

But facts cannot be brushed aside, and this webinar didn’t try to. Everyone in the discussion has a stake in the continued growth of WordPress, and their goal was not to malign WordPress but to discuss the current situation, identify what it is, and try to reach an understanding of ways to solve the problem.

The live webinar featured:

  • Miriam Schwab, Elementor’s Head of WP Relations
  • Rich Tabor, Automattic Product Manager
  • Joost de Valk, founder of Yoast SEO
  • Co-hosts Matt Cromwell and Amber Hinds, both members of the WordPress developer community, who moderated the discussion

WordPress Market Share Stagnation

The webinar acknowledged that WordPress market share, the percentage of websites online that use WordPress, was stagnating. Stagnation is a state in which something is neither moving forward nor backward; it is simply stuck at an in-between point. That is what was openly acknowledged, and the main point of the discussion was understanding the reasons why and what could be done about it.

Statistics gathered by the HTTP Archive and published on Joost de Valk’s blog show that WordPress experienced year-over-year growth of 1.85%, having spent the year alternately growing and contracting its market share. For example, over the latest month-over-month period its market share dropped by 0.28%.

Pointing to WordPress’ 1.85% growth rate as evidence that everything is fine ignores the fact that a large share of new businesses and websites coming online are going to other platforms, whose year-over-year growth rates outpace WordPress’ rate of growth.

Out of the top 10 Content Management Systems, only six experienced year over year (YoY) growth.

CMS YoY Growth

  1. Webflow: 25.00%
  2. Shopify: 15.61%
  3. Wix: 10.71%
  4. Squarespace: 9.04%
  5. Duda: 8.89%
  6. WordPress: 1.85%

Why Stagnation Is A Problem

An important point made in the webinar is that stagnation can have a negative trickle-down effect on the business ecosystem by reducing growth opportunities and customer acquisition. If fewer of the new businesses coming online opt for WordPress, there are fewer potential clients who will ever come looking for a theme, plugin, development, or SEO service.

It was noted at the 4:18 minute mark by Joost de Valk:

“…when you’re investing and when you’re building a product in the WordPress space, the market share or whether WordPress is growing or not has a deep impact on how easy it is to well to get people to, to buy the software that you want to sell them.”

Perception Of Innovation

One of the potential reasons for the struggle to achieve significant growth is the perception of a lack of innovation. For example, there is still no integration with popular technologies like Next.js, an open-source web development framework optimized for fast rollout of scalable and search-friendly websites.

It was observed at the 16:51 minute mark:

“…and still today we have no integration with next JS or anything like that…”

Someone else agreed but also suggested, at the 41:52 minute mark, that the lack of innovation in the WordPress core can be seen as a deliberate effort to keep WordPress extensible: if users find a gap, a developer can step in with a plugin that makes WordPress whatever users and developers want it to be.

“It’s not trying to be everything for everyone because it’s extensible. So if WordPress has a… let’s say a weakness for a particular segment or could be doing better in some way. Then you can come along and develop a plug in for it and that is one of the beautiful things about WordPress.”

Is Improved Marketing A Solution?

One of the things identified as an area for improvement is marketing. Nobody claimed it would solve every problem; it was simply noted that competitors are actively advertising and promoting themselves while WordPress, by comparison, is not. To extend that idea (which wasn’t expressed in the webinar): if WordPress isn’t out there putting out a positive marketing message, then the only thing consumers might be exposed to is the daily news of another vulnerability.

Someone commented at the 16:21 minute mark:

“I’m missing the excitement of WordPress and I’m not feeling that in the market. …I think a lot of that is around the product marketing and how we repackage WordPress for certain verticals because this one-size-fits-all means that in every single vertical we’re being displaced by campaigns that have paid or, you know, have received a a certain amount of funding and can go after us, right?”

This idea of marketing being a shortcoming of WordPress was raised earlier in the webinar, at the 18:27 minute mark, where it was acknowledged that growth has in some respects been driven by the WordPress ecosystem, with associated products like Elementor driving the adoption of WordPress by new businesses.

They said:

“…the only logical conclusion is that the fact that marketing of WordPress itself is has actually always been a pain point, is now starting to actually hurt us.”

Future Of WordPress

This webinar is important because it features the voices of people who are actively involved at every level of WordPress, from development, marketing, accessibility, WordPress security, to plugin development. These are insiders with a deep interest in the continued evolution of WordPress as a viable platform for getting online.

The fact that they’re talking about the stagnation of WordPress should be of concern to everybody, and the fact that they’re talking about solutions shows that the WordPress community is not in denial but is directly confronting the situation, which is how a thriving ecosystem should respond.

Watch the webinar:

Is WordPress’ Market share Declining? And What Should Product Businesses Do About it?

Featured Image by Shutterstock/Krakenimages.com

Vulnerabilities in Two ThemeForest WordPress Themes, 500k+ Sold via @sejournal, @martinibuster

A vulnerability advisory was issued about two WordPress themes found on ThemeForest that could allow a hacker to delete arbitrary files or inject malicious scripts into a website.

Two WordPress Themes Sold On ThemeForest

The two WordPress themes with vulnerabilities are sold on ThemeForest and together they have over a half million sales.

The two themes are:

  • Betheme theme for WordPress (306,362 sales)
  • The Enfold – Responsive Multi-Purpose Theme for WordPress (260,607 sales)

Betheme Theme for WordPress Vulnerability

Wordfence issued an advisory that the Betheme theme contained a PHP Object Injection vulnerability rated as a high threat.

Wordfence was discreet in its description of the vulnerability and offered no details of the specific flaw. However, in the context of a WordPress theme, a PHP Object Injection vulnerability usually arises when untrusted user input is deserialized without proper validation, letting an attacker smuggle in a crafted object.

This is how Wordfence described it:

“The Betheme theme for WordPress is vulnerable to PHP Object Injection in all versions up to, and including, 27.5.6 via deserialization of untrusted input of the ‘mfn-page-items’ post meta value. This makes it possible for authenticated attackers, with contributor-level access and above, to inject a PHP Object. No known POP chain is present in the vulnerable plugin.

If a POP chain is present via an additional plugin or theme installed on the target system, it could allow the attacker to delete arbitrary files, retrieve sensitive data, or execute code.”
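The mechanism Wordfence describes is not unique to PHP. As a rough analogue, Python’s pickle module shows how deserializing untrusted input can execute attacker-chosen code; this is an illustration of the general risk, not Betheme’s actual code, and the class name is hypothetical:

```python
import pickle

class Gadget:
    """Stand-in for a 'POP chain' class: its unpickling triggers a call."""
    def __reduce__(self):
        # Benign payload for demonstration: just calls print().
        # A real attacker would substitute something far more damaging.
        return (print, ("code ran during deserialization",))

# Bytes the attacker controls (e.g., a post meta value):
untrusted_bytes = pickle.dumps(Gadget())

# The victim merely deserializes -- and the attacker's call runs.
pickle.loads(untrusted_bytes)  # prints: code ran during deserialization
```

This is why both Python’s documentation and PHP security guidance say the same thing: never deserialize data you do not fully trust.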

Has Betheme Theme Been Patched?

The Betheme theme received a patch on August 30, 2024, but Wordfence’s advisory does not yet acknowledge it; the advisory may simply need to be updated. Nevertheless, it’s recommended that users of the Betheme theme update to the newest version, 27.5.7.1.

The Enfold – Responsive Multi-Purpose Theme for WordPress

The Enfold Responsive Multi-Purpose WordPress theme contains a different flaw, which was given a lower severity rating of 6.4. However, the publisher of the theme has not issued a fix for the vulnerability.

A stored cross-site scripting (XSS) vulnerability was discovered in the theme, originating in a failure to sanitize inputs and escape outputs.

Wordfence describes the vulnerability:

“The Enfold – Responsive Multi-Purpose Theme theme for WordPress is vulnerable to Stored Cross-Site Scripting via the ‘wrapper_class’ and ‘class’ parameters in all versions up to, and including, 6.0.3 due to insufficient input sanitization and output escaping. This makes it possible for authenticated attackers, with Contributor-level access and above, to inject arbitrary web scripts in pages that will execute whenever a user accesses an injected page.”
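The fix for this class of bug is output escaping. As a language-neutral illustration (written in Python for brevity, with a hypothetical template function; the actual theme is PHP), escaping converts a script payload into inert text:

```python
import html

def render_wrapper(user_supplied_class):
    """Hypothetical template that echoes a user-supplied CSS class,
    mirroring the 'wrapper_class' parameter named in the advisory."""
    safe = html.escape(user_supplied_class, quote=True)
    return f'<div class="{safe}">…</div>'

# An attacker tries to break out of the attribute and inject a script:
payload = '"><script>alert(1)</script>'
print(render_wrapper(payload))
# The quotes and angle brackets arrive as &quot;, &lt;, and &gt;, so the
# browser renders harmless text instead of executing a script.
```

The vulnerable versions of Enfold skip this escaping step for the named parameters, which is what lets a contributor-level account store a script that runs in other users’ browsers.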

Enfold Vulnerability Has Not Been Patched

The Enfold – Responsive Multi-Purpose Theme for WordPress has not been patched as of this writing and remains vulnerable. The changelog documenting updates to the theme shows that it was last updated on August 19, 2024.

Screenshot Of Enfold WordPress Theme’s Changelog


Wordfence’s advisory warned:

“No known patch available. Please review the vulnerability’s details in depth and employ mitigations based on your organization’s risk tolerance. It may be best to uninstall the affected software and find a replacement.”

Read the advisories:

Betheme <= 27.5.6 – Authenticated (Contributor+) PHP Object Injection

Enfold <= 6.0.3 – Authenticated (Contributor+) Stored Cross-Site Scripting via wrapper_class and class Parameters

Ask an Expert: Should Merchants Sell on TikTok Shop?

“Ask an Expert” is an occasional feature where we pose questions to seasoned ecommerce pros. For this installment, we’ve turned to Charles Nicholls, a serial SaaS ecommerce entrepreneur, most recently of SimplicityDX, a customer acquisition platform, and a Practical Ecommerce contributor.

He addresses the viability of selling on TikTok Shop.

Practical Ecommerce: Should merchants consider TikTok Shop despite the political uncertainties?

Charles Nicholls: When evaluating TikTok Shop, merchants should first decide whether to sell on social channels at all.


Social platforms such as TikTok Shop, Instagram Shop, and others function similarly to marketplaces such as Amazon, but they come with trade-offs. They act as the merchant of record, controlling customer data, the sales process, and returns. The channels can generate significant sales volume, but often at the expense of margins and relationships. Unlike direct sales from an ecommerce site, selling on social platforms limits the ability to know and engage with customers for repeat sales.

Social commerce is becoming increasingly influential. For many brands, social media is now the starting point for over half of sales. Yet measuring its impact remains challenging. Traditional tools such as Google Analytics 4 often underestimate social media’s influence. Only customer surveys typically reveal the actual effect.

Merchants usually approach social commerce in one of two ways, both centered on the start of the buying journey. The key decision is where the sale occurs. If the business model relies on long-term customer relationships and repeat sales, directing traffic to the ecommerce site is essential. Conversely, if the priority is gaining market share, brand visibility, and volume sales, then selling on social platforms could help.

TikTok Shop offers unique opportunities and challenges. The algorithm prioritizes viral, engaging content, favoring skilled creators who can rapidly drive product awareness and sales among their followers. Hence merchants contemplating TikTok should factor in working with those influencers.

TikTok shoppers are highly price-sensitive and respond to discounts and impulse buys. Thus sellers with low-cost entry-level products, such as beauty items, can build a brand on that platform.

Yet a critical component is consumer preference. Upwards of 75% of online shoppers prefer purchasing directly from brands rather than through influencers or marketplaces. Consumers value brands’ authenticity and trust.

Should merchants sell on TikTok Shop? The decision boils down to priorities. The benefits are visibility and sales volume. The downsides are lower margins and diminished customer relationships.

Google’s New Support For AVIF Images May Boost SEO via @sejournal, @martinibuster

Google announced that images in the AVIF file format are now eligible to be shown in Google Search and Google Images, including all platforms that surface Google Search data. AVIF can dramatically lower image sizes and improve Core Web Vitals scores, particularly Largest Contentful Paint.

How AVIF Can Improve SEO

Getting pages crawled and indexed is the first step of effective SEO. Anything that lowers file size and speeds up web page rendering helps search crawlers get to the content faster and increases the number of pages crawled.

Google’s crawl budget documentation recommends increasing the speeds of page loading and rendering as a way to avoid receiving “Hostload exceeded” warnings.

It also says that faster loading times enable Googlebot to crawl more pages:

Improve your site’s crawl efficiency

Increase your page loading speed
Google’s crawling is limited by bandwidth, time, and availability of Googlebot instances. If your server responds to requests quicker, we might be able to crawl more pages on your site.

What Is AVIF?

AVIF (AV1 Image File Format) is a next-generation open source image format that combines the best of the JPEG, PNG, and GIF formats in a more compressed package, producing files roughly 50% smaller than JPEG. AVIF supports transparency like PNG and photographic images like JPEG, but with a higher dynamic range, deeper blacks, and better compression (meaning smaller file sizes). AVIF even supports animation like GIF does.

Is AVIF Supported?

AVIF is currently supported by Chrome, Edge, Firefox, Opera, and Safari browsers. Not all content management systems support AVIF. However, both WordPress and Joomla support AVIF. In terms of CDN, Cloudflare also already supports AVIF.
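Because support still varies across platforms, a common way to adopt AVIF safely is the HTML `<picture>` element, which serves the AVIF file to browsers that understand it and falls back to JPEG everywhere else. As a minimal illustrative sketch (the file names here are hypothetical), this Python helper generates that markup:

```python
def picture_markup(avif_src: str, fallback_src: str, alt: str) -> str:
    """Build a <picture> element that serves AVIF where supported
    and falls back to a JPEG <img> everywhere else."""
    return (
        "<picture>\n"
        f'  <source srcset="{avif_src}" type="image/avif">\n'
        f'  <img src="{fallback_src}" alt="{alt}">\n'
        "</picture>"
    )

print(picture_markup("hero.avif", "hero.jpg", "Product hero image"))
```

The `type="image/avif"` hint lets the browser skip the AVIF source without downloading it if the format isn't supported, so older clients pay no penalty.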

I couldn’t at this time ascertain whether Bing supports AVIF files and will update this article once I find out.

Current website usage of AVIF stands at 0.2%, according to W3Techs (https://w3techs.com/technologies/overview/image_format), but now that it's eligible to be surfaced in Google Search, expect that percentage to grow. AVIF will probably become a standard image format because its high compression will help sites perform far better than they currently do with JPEG and PNG.

AVIF Images Are Automatically Indexable By Google

According to Google’s announcement there is nothing special that needs to be done to make AVIF image files indexable.

“Over the recent years, AVIF has become one of the most commonly used image formats on the web. We’re happy to announce that AVIF is now a supported file type in Google Search, for Google Images as well as any place that uses images in Google Search. You don’t need to do anything special to have your AVIF files indexed by Google.”

Read Google’s announcement:

Supporting AVIF in Google Search

Featured Image by Shutterstock/Cast Of Thousands

5 SEO Insights About Outbound Links via @sejournal, @martinibuster

Outbound links have traditionally been considered a ranking- or relevance-related factor, but those ideas are outdated now that search engines use AI for spam detection and ranking. It's time to consider new ways of thinking about outbound links.

1. A Page Is About Multiple Subtopics

One thing people worry about is whether it's good practice to link out to pages that aren't specifically about the main topic of the linking page. But if a sentence or paragraph exists only to support an off-topic outbound link, the bigger problem is that the entire paragraph is off-topic and should be removed. Every outbound link should be relevant to the context where it originates, and every context should be relevant within the overall context of the page.

A webpage is rarely ever about one topic. It’s usually about one topic and the related subtopics, whatever makes sense for the user.

  • Never link out because you think it will make the page more relevant for the topic or subtopic.
  • Always link out if it makes sense within the context.
  • If the content says that research proves X, Y, and Z, then it makes sense to link out to a page about that research so the reader can verify the claim.

A page that links out to other pages that are on related subtopics is fine.

2. Relevance Is Not Always About Keywords

In the context of outbound links, relevance could be said to be about how closely related a word, sentence, paragraph or webpage is to whatever is being linked to.

A more up to date definition of relevance is how closely the link aligns with the needs or expectations of the reader at the exact moment that an outbound link can satisfy those needs or expectations.

3. Poor Outbound Links May Impact Site Quality

Linking to low quality sites could cause Google to consider the linking site as also low quality. What a site links to may impact the quality of the site. But what’s a low quality site?

Check If The Site Is Created For Search Engines

The most current definition of a low quality site is one that is created to rank in search engines. That can be an affiliate site built to rank for specific keyword phrases without any expertise, or without anything new or unique to add to what is already ranking for the topic.

Typical signs of a site created to rank are keyword-focused content (instead of reader-focused content), keyword-focused titles and headings, pages that target only the highest-volume queries, and headings that are exact matches for People Also Ask phrases, that kind of thing.

In a way, judging whether a site is created for search engines can also be one of those “you know it when you see it” judgment calls.

4. Quality Check All Outbound Links

One way to evaluate a site you’re considering linking to is to look at the sites that they are linking to. If it looks like they’re engaged in selling links then I would consider the entire site to be poisoned.

Link sellers are easy to spot. They typically link to three pages: two of them to reputable websites and one to a low quality site that no sane person would link to. Yes, it’s that easy to spot, and yes, they are naïve to believe they can mask their link selling by linking to two reputable sites.

The following image represents the linking patterns of spam sites and normal sites. Spam sites tend to link to other spam sites and to reputable sites. A reputable site never links to a spam site (unless they were tricked by a link builder). This is an insight discovered in a research paper about link spam detection that looked at the direction of links.

Diagram: spammy links and normal links tend to form communities through their linking patterns. While spammy pages may link to normal pages, normal pages rarely link to spammy pages. This creates a map of the Internet that makes it easier to find linking patterns between normal pages while rejecting the spam links.

If the sites you link to have spammy outbound links, then maybe you should reconsider linking out to those sites. 

The point is that low quality sites link to normal sites. And normal sites don’t tend to link to low quality sites. This is the directional quality of outbound links which was discovered in 2007 as a way to unmask spam sites and help confirm normal sites by their outbound links (PDF on Archive.org). Even though that research paper is old, the insight about the directional quality of outbound links may still be pertinent today.
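The directional insight above can be sketched as a simple outbound-link check: a site whose outbound links point at known spam domains is itself suspect, while the reverse direction tells you little. This is a toy illustration, not how SpamBrain works, and the domains and link graph here are invented:

```python
# Hypothetical list of domains already known to be spam.
KNOWN_SPAM = {"cheap-pills.example", "casino-links.example"}

# Invented outbound-link graph: site -> domains it links out to.
link_graph = {
    "normal-blog.example": ["wikipedia.org", "nytimes.com"],
    "link-seller.example": ["wikipedia.org", "bbc.com", "cheap-pills.example"],
}

def is_suspect(site: str, graph: dict) -> bool:
    """Flag a site if any of its outbound links hit a known spam domain,
    since reputable sites rarely link out to spam."""
    return any(target in KNOWN_SPAM for target in graph.get(site, []))

for site in link_graph:
    print(site, "suspect" if is_suspect(site, link_graph) else "ok")
```

Note that the check only looks at where a site links *to*; linking to two reputable sites, as the link sellers in the example do, doesn't launder the one spammy link.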

Google uses an AI system called SpamBrain to discover spammy links, so it’s not inconceivable that directionality of outbound links is one of many considerations for determining spammy sites and networks of spammy sites.

Google’s documentation says this about SpamBrain, the spam fighting AI:

“Links still help us discover and rank results in meaningful ways, and we made a lot of progress in 2021 to protect this core signal. We launched a link spam update to broadly identify unnatural links and prevent them from affecting search quality.”

And elsewhere this:

“SpamBrain is our AI-based spam-prevention system. Besides using it to detect spam directly, it can now detect both sites buying links, and sites used for the purpose of passing outgoing links.”

5. Linking To .Edu and .Gov Sites Makes No Difference

Linking out to .edu and .gov pages is ok as long as it meets the information needs of the reader at the moment they come across the link.

Some people believe that linking to .gov and .edu pages helps rankings. This idea has been around since the very early 2000s.

  • Googlers have consistently debunked the idea that .gov and .edu pages have a special ranking benefit.
  • There is no patent or research that explicitly or implicitly says that sites with links from .edu and .gov sites are considered higher quality.
  • The entire idea is pure conjecture.

Outbound Links And Modern SEO

AI, neural networks and transformer based systems like BERT have changed how search engines detect site quality and links. This means that old practices related to outbound links should be reconsidered.

Featured Image by Shutterstock/eamesBot

Architecting cloud data resilience

Cloud has become a given for most organizations: according to PwC’s 2023 cloud business survey, 78% of companies have adopted cloud in most or all parts of the business. These companies have migrated on-premises systems to the cloud seeking faster time to market, greater scalability, cost savings, and improved collaboration.

Yet while cloud adoption is widespread, research by McKinsey shows that companies’ concerns around the resiliency and reliability of cloud operations, coupled with an ever-evolving regulatory environment, are limiting their ability to derive full value from the cloud. As the value of a business’s data grows ever clearer, the stakes of making sure that data is resilient are heightened. Business leaders now justly fear that they might run afoul of mounting data regulations and compliance requirements, that bad actors might target their data in a ransomware attack, or that an operational disruption affecting their data might grind the entire business to a halt.

For all its competitive advantages, moving to the cloud presents unique challenges for data resilience. In fact, the qualities of cloud that make it so appealing to businesses—scalability, flexibility, and the ability to handle rapidly changing data—are the same ones that make it challenging to ensure the resilience of mission-critical applications and their data in the cloud.

“A widely held misconception is that the durability of the cloud automatically protects your data,” says Rick Underwood, CEO of Clumio, a backup and recovery solutions provider. “But a multitude of factors in cloud environments can still reach your data and wipe it out, maliciously encrypt it, or corrupt it.”

Complicating matters is that moving data to the cloud can lead to reduced data visibility, as individual teams begin creating their own instances and IT teams may not be able to see and track all the organization’s data. “When you make copies of your data for all of these different cloud services, it’s very hard to keep track of where your critical information goes and what needs to be compliant,” says Underwood. The result, he adds, is a “Wild West in terms of identifying, monitoring, and gaining overall visibility into your data in the cloud. And if you can’t see your data, you can’t protect it.”

The end of traditional backup architecture

Until recently, many companies relied on traditional backup architectures to protect their data. But the inability of these backup systems to handle vast volumes of cloud data—and scale to accommodate explosive data growth—is becoming increasingly evident, particularly to cloud-native enterprises. In addition to issues of data volume, many traditional backup systems are ill-equipped to handle the sheer variety and rate of change of today’s enterprise data.

In the early days of cloud, Steven Bong, founder and CEO of AuditFile, had difficulty finding a backup solution that could meet his company’s needs. AuditFile supplies audit software for certified public accountants (CPAs) and needed to protect their critical and sensitive audit work papers. “We had to back up our data somehow,” he says. “Since there weren’t any elegant solutions commercially available, we had a home-grown solution. It was transferring data, backing it up from different buckets, different regions. It was fragile. We were doing it all manually, and that was taking up a lot of time.”

Frederick Gagle, vice president of technology for BioPlus Specialty Pharmacy, notes that backup architectures that weren’t designed for cloud don’t address the unique features and differences of cloud platforms. “A lot of backup solutions,” he says, “started off being on-prem, local data backup solutions. They made some changes so they could work in the cloud, but they weren’t really designed with the cloud in mind, so a lot of features and capabilities aren’t native.”

Underwood agrees, saying, “Companies need a solution that’s natively architected to handle and track millions of data operations per hour. The only way they can accomplish that is by using a cloud-native architecture.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

A new smart mask analyzes your breath to monitor your health

Your breath can give away a lot about you. Each exhalation contains all sorts of compounds, including possible biomarkers for disease or lung conditions, that could give doctors a valuable insight into your health.

Now a new smart mask, developed by a team at the California Institute of Technology, could help doctors check your breath for these signals continuously and in a noninvasive way. A patient could wear the mask at home, measure their own levels, and then go to the doctor if a flare-up is likely. 

“They don’t have to come to the clinic to assess their inflammation level,” says Wei Gao, professor of Medical Engineering at Caltech and one of the smart mask’s creators. “This can be lifesaving.”

The smart mask, details of which were published in Science today, uses a two-part cooling system to chill the breath of its wearer. The cooling turns the breath into exhaled breath condensate (EBC). 

EBC, essentially a liquid version of someone’s breath, is easier to analyze, because biomarkers like nitrite and alcohol content are more concentrated in a liquid than in a gas. The mask design takes inspiration from plants’ capillary abilities, using a series of microfluidic modules that create pressure to push the EBC fluid around to sensors in the mask.

The sensors are connected via Bluetooth to a device like a phone, where the patient has access to real-time health readings.

“The biggest challenge has always been collecting real-time samples. This problem has been solved. That’s a paradigm shift,” says Rajan Chakrabarty, professor of Environmental and Chemical Engineering at Washington University in St. Louis, who was not involved in the research.

The Caltech team tested the smart mask with patients, including several who had chronic obstructive pulmonary disease (COPD) or asthma or had just gotten over a covid-19 infection. They were testing the masks for comfort and breathability, but they also wanted to see if the masks actually worked at tracking useful biomarkers throughout a patient’s daily activities, such as exercise and work. 

The mask picked up on higher levels of nitrite in patients who had asthma or other conditions that involved inflamed airways. It also picked up on higher alcohol content after a patient went out drinking, which demonstrates another potential application of the mask. Analyzing breath this way is more accurate than the typical breathalyzer test, which involves a patient blowing into a device. Blowing can produce imprecise results due to alcohol in saliva being spit out.

The researchers hope this is just the beginning. They plan to test the masks on a larger population, and if all goes well, commercialize the masks to get them out to a wider audience. They hope the mask will be a platform for broader application, where sensors for a range of biomarkers could be slotted in and out. 

“What I would like to be able to do is take off their sensors, put in my sensors, and this becomes the building block for doing all other types of development,” says Albert Titus, professor and chair of the Department of Biomedical Engineering at the University at Buffalo, who wasn’t part of the Caltech team. “That’s where I’d like to see it go.”

For example, it may be possible to measure ketones in the breath, high levels of which are a sign of diabetes, or glucose levels, to help people with diabetes monitor their condition.

“The mask can be reconfigured for many different applications,” says Gao.

How machine learning is helping us probe the secret names of animals

Do animals have names? According to the poet T.S. Eliot, cats have three: the name their owner calls them (like George); a second, more noble one (like Quaxo or Cricopat); and, finally, a “deep and inscrutable” name known only to themselves “that no human research can discover.”

But now, researchers armed with audio recorders and pattern-recognition software are making unexpected discoveries about the secrets of animal names—at least with small monkeys called marmosets.  

That’s according to a team at Hebrew University in Israel, who claim in the journal Science this week they’ve discovered that marmosets “vocally label” their monkey friends with specific sounds.

Until now, only humans, dolphins, elephants, and probably parrots had been known to use specific sounds to call out to other individuals.

Marmosets are highly social creatures that maintain contact through high-pitched chirps and twitters called “phee-calls.” By recording different pairs of monkeys placed near each other, the team in Israel says they found the animals will adjust their sounds toward a vocal label that’s specific to their conversation partner. 

“It’s similar to names in humans,” says David Omer, the neuroscientist who led the project. “There’s a typical time structure to their calls, and what we report is that the monkey fine-tunes it to encode an individual.”

These names aren’t really recognizable to the human ear; instead, they were identified via a “random forest,” the statistical machine learning technique Omer’s team used to cluster, classify, and analyze the sounds.

To prove they’d cracked the monkey code—and learned the secret names—the team played recordings at the marmosets through a speaker and found they responded more often when their label, or name, was in the recording.

This sort of research could provide clues to the origins of human language, which is arguably the most powerful innovation in our species’ evolution, right up there with opposable thumbs. In years past, it’s been argued that human language is unique and that animals lack both the brains and vocal apparatus to converse.

But there’s growing evidence that isn’t the case, especially now that the use of names has been found in at least four distantly related species. “This is very strong evidence that the evolution of language was not a singular event,” says Omer.

Some similar research tactics were reported earlier this year by Mickey Pardo, a postdoctoral researcher, now at Cornell University, who spent 14 months in Kenya recording elephant calls. Elephants sound alarms by trumpeting, but in reality most of their vocalizations are deep rumbles that are only partly audible to humans.

Pardo also found evidence that elephants use vocal labels, and he says he can definitely get an elephant’s attention by playing the sound of another elephant addressing it. But does this mean researchers are now “speaking animal”? 

Not quite, says Pardo. Real language, he thinks, would mean the ability to discuss things that happened in the past or string together more complex ideas. Pardo says he’s hoping to determine next if elephants have specific sounds for deciding which watering hole to visit—that is, whether they employ place names.

Several efforts are underway to discover if there’s still more meaning in animal sounds than we thought. This year, a group called Project CETI that’s studying the songs of sperm whales found they are far more complex than previously recognized. It means the animals, in theory, could be using a kind of grammar—although whether they actually are saying anything specific isn’t known.

Another effort, the Earth Species Project, aims to use “artificial intelligence to decode nonhuman communication” and has started helping researchers collect more data on animal sounds to feed into those models. 

The team in Israel say they will also be giving the latest types of artificial intelligence a try. Their marmosets live in a laboratory facility, and Omer says he’s already put microphones in monkeys’ living space in order to record everything they say, 24 hours a day.

Their chatter, Omer says, will be used to train a large language model that could, in theory, be used to finish a series of calls that a monkey started, or produce what it predicts is an appropriate reply. But will a primate language model actually make sense, or will it just gibber away without meaning? 

Only the monkeys will be able to say for sure.  

“I don’t have any delusional expectations that they will talk about Nietzsche,” says Omer. “I don’t expect it to be extremely complex like a human, but I would expect it to help us understand something about how our language developed.”