Google Updates Organization Structured Data Documentation via @sejournal, @martinibuster

Google updated the opening paragraph of the Organization Structured Data documentation to provide a clearer, more comprehensive description of its purpose, with improved clarity on four specific points. The result is a model of how to write a high-quality opening paragraph.

The new documentation has high information density: it communicates more information with fewer words. Information density matters because dense writing conveys ideas more effectively.

The changes add the following four points:

  1. It emphasizes that organization structured data disambiguates one organization from others using structured data properties like iso6523 and naics. To disambiguate means to clearly distinguish one entity from other, similar entities.
  2. It explains how certain Schema.org properties can influence visual elements in search results and in a knowledge panel. A knowledge panel is a box containing data about an organization (or person) that appears on the right side of search results when a user queries about those entities.
  3. It specifically mentions how organization structured data can enrich merchant knowledge panels and brand profiles with details about return policies, addresses, and contact information.
  4. The new opening paragraph now states that there are no required properties for organization structured data and recommends adding as many structured data properties as necessary for relevance. This is not new to the documentation, but it is newly emphasized in the opening paragraph.

None of the four points above appeared in the opening paragraph of the previous version of the documentation. The revision is an example of how to stay on topic and communicate what a web page is about.

Here is the previous opening paragraph:

“You can use organization structured data to let Google know about your organization’s administrative details, for example, logo, address, contact information, and business identifiers. Google can make use of this markup in knowledge panels and other visual elements (such as attribution), which helps users to find your organization’s details on Google Search.”

This is the new opening paragraph:

“Adding organization structured data to your home page can help Google better understand your organization’s administrative details and disambiguate your organization in search results. Some properties are used behind the scenes to disambiguate your organization from other organizations (like iso6523 and naics), while others can influence visual elements in Search results (such as which logo is shown in Search results and your knowledge panel). If you’re a merchant, you can influence more details in your merchant knowledge panel and brand profile, such as return policy, address, and contact information. There are no required properties; instead, we recommend adding as many properties that are relevant to your organization.”
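Markup like the paragraph describes is normally added to a site's home page as JSON-LD. The sketch below, written in Python purely for illustration, assembles a hypothetical Organization object; all values are invented, and note that schema.org spells the ISO 6523 property `iso6523Code`:

```python
import json

# Hypothetical Organization markup illustrating the properties the new
# paragraph calls out (logo, address, contact info, iso6523Code, naics).
# Every value here is invented for illustration.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "naics": "519130",                  # industry classification code
    "iso6523Code": "0199:123456789",    # ISO 6523 business identifier
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "Springfield",
        "addressCountry": "US",
    },
    "contactPoint": {
        "@type": "ContactPoint",
        "telephone": "+1-555-0100",
        "contactType": "customer service",
    },
}

# The serialized JSON would go inside a <script type="application/ld+json">
# tag on the home page.
print(json.dumps(org, indent=2))
```

Consistent with the new paragraph, none of these properties is required; a site would include whichever ones are relevant to the organization.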

The above opening paragraph is the only change to the document, yet it improves the entire document because readers know what to expect as they continue reading.

Takeaways:

Understanding why the documentation improved can give publishers and SEOs a clearer sense of when and why a web page needs to be refreshed.

The changes to Google’s documentation improve it by clearly explaining what the web page is about, how organization structured data benefits users, and identifying a specific class of users who especially benefit from this kind of structured data. The opening paragraph invites readers to continue reading by ensuring they understand the web page’s purpose.

Why the new documentation is improved:

  • Higher information density
  • Improved topicality (communicates what the document is about)
  • Gets to the point fast
  • Offers more details and examples
  • Is more comprehensive
  • Mentions users who will benefit (merchants)

Why the previous version needed to be fixed:

  • Lacked specifics
  • Was limited in scope
  • Lacked actionable information
  • Didn’t prepare the reader for what the entire page is about

Read the new documentation here:

Organization (Organization) structured data

Compare it to the old documentation here:

Organization (Organization) structured data – WaybackMachine

Featured Image by Shutterstock/Mix and Match Studio

Maybe you will be able to live past 122

The UK’s Office for National Statistics has an online life expectancy calculator. Enter your age and sex, and the website will, using national averages, spit out the age at which you can expect to pop your clogs. For me, that figure is coming out at 88 years old.

That’s not too bad, I figure, given that globally, life expectancy is around 73. But I’m also aware that this is a lowball figure for many in the longevity movement, which has surged in recent years. When I interview a scientist, doctor, or investor in the field, I always like to ask about personal goals. I’ve heard all sorts. Some have told me they want an extra decade of healthy life. Many want to get to 120, close to the current known limit of human age. Others have told me they want to stick around until they’re 200. And some have told me they don’t want to put a number on it; they just want to live for as long as they possibly can—potentially indefinitely.

How far can they go? This is a good time to ask the question. The longevity scene is having a moment, thanks to a combination of scientific advances, public interest, and an unprecedented level of investment. A few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging.

Take, for example, the concept of cellular reprogramming. Nobel Prize–winning research has shown it is possible to return adult cells to a “younger” state more like that of a stem cell. Billions of dollars have been poured into trying to transform this discovery into a therapy that could wind back the age of a person’s cells and tissues, potentially restoring some elements of youth.

Many other avenues are being explored, including a diabetes drug that could have broad health benefits; drugs based on a potential anti-aging compound discovered in the soil of Rapa Nui (Easter Island); attempts to rejuvenate the immune system; gene therapies designed to boost muscle or extend the number of times our cells can divide; and many, many more. Other researchers are pursuing ways to clear out the aged, worn-out cells in our bodies. These senescent cells appear to pump out chemicals that harm the surrounding tissues. Around eight years ago, scientists found that mice cleared of senescent cells lived 25% longer than untreated ones. They also had healthier hearts and took much longer to develop age-related diseases like cancer and cataracts. They even looked younger.

Unfortunately, human trials of senolytics—drugs that target senescent cells—haven’t been quite as successful. Unity Biotechnology, a company cofounded by leading researchers in the field, tested such a drug in people with osteoarthritis. In 2020, the company officially abandoned that drug after it was found to be no better than a placebo in treating the condition.

That doesn’t mean we won’t one day figure out how to treat age-related diseases, or even aging itself, by targeting senescent cells. But it does illustrate how complicated the biology of aging is. Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.

Still, we are getting better at testing potential therapies in more humanlike models. We’re finding new and improved ways to measure the aging process itself. The X Prize is offering $101 million to researchers who find a way to restore at least 10 years of “muscle, cognitive, and immune function” in 65- to 80-year-olds with a treatment that takes one year or less to administer. Given that the competition runs for seven years, it’s a tall order; Jamie Justice, executive director of the X Prize’s health-span domain, told me she initially fought back on the challenging goal and told the organization’s founder, Peter Diamandis, there was “no way” researchers could achieve it. But we’ve seen stranger things in science. 

Some people are banking on this kind of progress. Not just the billionaires who have already spent millions of dollars and a significant chunk of their time on strategies that might help them defy aging, but also the people who have opted for cryopreservation. There are hundreds of bodies in storage—bodies of people who believed they might one day be reanimated. For them, the hopes are slim. I asked Justice whether she thought they stood a chance at a second life. “Honest answer?” she said. “No.”

It looks likely that something will be developed in the coming decades that will help us live longer, in better health. Not an elixir for eternal life, but perhaps something—or a few somethings—that can help us stave off some of the age-related diseases that tend to kill a lot of us. Such therapies may well push life expectancy up. I don’t feel we need a massive increase, but perhaps I’ll feel differently when I’m approaching 88.

The ONS website gives me a one in four chance of making it to 96, and a one in 10 chance of seeing my 100th birthday. To me, that sounds like an impressive number—as long as I get there in semi-decent health.

I’d still be a long way from the current record of 122 years. But it might just be that there are some limitations we must simply come to terms with—as individuals and in society at large. In a 2017 paper making the case for a limit to the human life span, scientists Jan Vijg and Eric Le Bourg wrote something that has stuck with me—and is worth bearing in mind when considering the future of human longevity: “A species does not need to live for eternity to thrive.” 

AI and the future of sex

The power of pornography doesn’t lie in arousal but in questions. What is obscene? What is ethical or safe to watch? 

We don’t have to consume or even support it, but porn will still demand answers. The question now is: What is “real” porn? 

Anti-porn crusades have been at the heart of the US culture wars for generations, but by the start of the 2000s, the issue had lost its hold. Smartphones made porn too easy to spread and hard to muzzle. Porn became a politically sticky issue, too entangled with free speech and evolving tech. An uneasy truce was made: As long as the imagery was created by consenting adults and stayed on the other side of paywalls and age verification systems, it was to be left alone. 

But today, as AI porn infiltrates dinner tables, PTA meetings, and courtrooms, that truce may not endure much longer. The issue is already making its way back into the national discourse; Project 2025, the Heritage Foundation–backed policy plan for a future Republican administration, proposes the criminalization of porn and the arrest of its creators.

But what if porn is wholly created by an algorithm? In that case, whether it’s obscene, ethical, or safe becomes secondary to another question: What does it mean for porn to be “real”—and what will the answer demand from all of us?

During my time as a filmmaker in adult entertainment, I witnessed seismic shifts: the evolution from tape to digital, the introduction of new HIV preventions, and the disruption of the industry by free streaming and social media. An early tech adopter, porn was an industry built on desires, greed, and fantasy, propped up by performances and pharmaceuticals. Its methods and media varied widely, but the one constant was its messy humanity. Until now.


When AI-generated pornography first emerged, it was easy to keep a forensic distance from the early images and dismiss them as a parlor trick. They were laughable and creepy: cheerleaders with seven fingers and dead, wonky eyes. Then, seemingly overnight, they reached uncanny photorealism. Synthetic erotica, like hentai and CGI, has existed for decades, but I had never seen porn like this. These were the hallucinations of a machine trained on a million pornographic images, both the creation of porn and a distillation of it. Femmes fatales with psychedelic genitalia, straight male celebrities in same-sex scenes, naked girls in crowded grocery stores—posted not in the dark corners of the internet but on social media. The images were glistening and warm, raising fresh questions about consent and privacy. What would these new images turn us into?

In September of 2023, the small Spanish town of Almendralejo was forced to confront this question. Twenty girls returned from summer break to find naked selfies they’d never taken being passed around at school. Boys had rendered the images using an AI “nudify” app with just a few euros and a yearbook photo. The girls were bullied and blackmailed, suffered panic attacks and depression. The youngest was 11. The school and parents were at a loss. The tools had arrived faster than the speed of conversation, and they did not discriminate. By the end of the school year, similar cases had spread to Australia, Quebec, London, and Mexico. Then explicit AI images of Taylor Swift flooded social media. If she couldn’t stop this, a 15-year-old from Michigan stood no chance.

The technology behind pornography never slows down, regardless of controversies. When students return to school this fall, it will be in the shadow of AI video engines like Sora and Runway 3, which produce realistic video from text prompts and photographs. If still images have caused so much global havoc, imagine what video could do and where the footage could end up. 

As porn becomes more personal, it’s also becoming more personalized. Users can now check boxes on a list of options as long as the Cheesecake Factory menu to create their ideal scenes: categories like male, female, and trans; ages from 18 to 90; breast and penis size; details like tan lines and underwear color; backdrops like grocery stores, churches, the Eiffel Tower, and Stonehenge; even weather, like tornadoes. It may be 1s and 0s, but AI holds no binary; it holds no judgment or beauty standards. It can render seldom-represented bodies, like those of mature, transgender, and disabled people, in all pairings. Hyper-customizable porn will no longer require performers—only selections and an answer to the question “What is it that I really like?” While Hollywood grapples with the ethics of AI, artificial porn films will become a reality. Celebrities may boost their careers by promoting their synthetic sex tapes on late-night shows.

The progress of AI porn may shift our memories, too. AI is already used to extend home movies and turn vintage photos into live-action scenes. What happens when we apply this to sex? Early sexual images etch themselves on us: glimpses of flesh from our first crush, a lost lover, a stranger on the bus. These erotic memories depend on the specific details for their power: a trail of hair, panties in a specific color, sunlight on wet lips, my PE teacher’s red gym shorts. They are ideal for AI prompts. 

Porn and real-life sex affect each other in a loop. If people become accustomed to getting exactly what they want from erotic media, this could further affect their expectations of relationships. A first date may have another layer of awkwardness if each party has already seen an idealized, naked digital doppelganger of the other. 

Despite (or because of) this blurring of lines, we may actually start to see a genre of “ethical porn.” Without the need for sets, shoots, or even performers, future porn studios might not deal with humans at all. This may be appealing for some viewers, who can be sure that new actors are not underage, trafficked, or under the influence.

A synergy has been brewing since the ’90s, when CD-ROM games, life-size silicone dolls, and websites introduced “interactivity” to adult entertainment. Thirty years later, AI chatbot “partners” and cheaper, lifelike sex dolls are more accessible than ever. Porn tends to merge all available tech toward complete erotic immersion. The realism of AI models has already broken the dam to the uncanny valley. Soon, these avatars will be powered by chatbots and embodied in three-dimensional prosthetics, all existing in virtual-reality worlds. What follows will be the fabled sex robot. 

So what happens when we’ve removed the “messy humanity” from sex itself? Porn is defined by the needs of its era. Ours has been marked by increasing isolation. The pandemic further conditioned us to digitize our most intimate moments, bringing us FaceTime hospital visits and weddings, and caused a deep discharge of our social batteries. Adult entertainment may step into that void. The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. In the near future, we may find this porn arousing because of its artificiality, not in spite of it.

Leo Herrera is a writer and artist. He explores how tech intersects with sex and culture on Substack at Herrera Words.

Inside the long quest to advance Chinese writing technology

Every second of every day, someone is typing in Chinese. In a park in Hong Kong, at a desk in Taiwan, in the checkout line at a Family Mart in Shanghai, the automatic doors chiming a song each time they open. Though the mechanics look a little different from typing in English or French—people usually type the pronunciation of a character and then pick it out of a selection that pops up, autocomplete-style—it’s hard to think of anything more quotidian. The software that allows this exists beneath the awareness of pretty much everyone who uses it. It’s just there.
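The autocomplete-style flow described above is easy to picture as code. Here is a toy sketch in Python, with an invented candidate table standing in for a real input method's dictionary and ranking model:

```python
# Toy sketch of a pinyin input method: the user types a pronunciation,
# gets back a ranked popup of candidate characters, and picks one.
# The candidate table below is invented for illustration; real input
# methods rank candidates by frequency and context.
CANDIDATES = {
    "ma": ["妈", "马", "吗", "骂"],   # mother, horse, question particle, scold
    "shi": ["是", "十", "时", "事"],  # to be, ten, time, matter
}

def lookup(pinyin: str) -> list[str]:
    """Return the ranked candidate characters for a pinyin syllable."""
    return CANDIDATES.get(pinyin, [])

def pick(pinyin: str, index: int) -> str:
    """Simulate the user choosing the index-th candidate from the popup."""
    return lookup(pinyin)[index]

# Typing "ma" and picking the second candidate yields 马 (horse).
print(pick("ma", 1))
```

The hard part, as the books discussed below show, was never this lookup step itself but agreeing on how characters should be indexed and entered in the first place.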

cover of The Chinese Computer by Tom Mullaney
The Chinese Computer: A Global History of the Information Age
Thomas S. Mullaney
MIT PRESS, 2024

What’s largely been forgotten—and what most people outside Asia never even knew in the first place—is that a large cast of eccentrics and linguists, engineers and polymaths, spent much of the 20th century torturing themselves over how Chinese was ever going to move away from the ink brush to any other medium. This process has been the subject of two books published in the last two years: Thomas Mullaney’s scholarly work The Chinese Computer and Jing Tsu’s more accessible Kingdom of Characters. Mullaney’s book focuses on the invention of various input systems for Chinese starting in the 1940s, while Tsu’s covers more than a century of efforts to standardize Chinese and transmit it using the telegraph, typewriter, and computer. But both reveal a story that’s tumultuous and chaotic—and just a little unsettling in the futility it reflects.   

cover of Kingdom of Characters
Kingdom of Characters: The Language Revolution That Made China Modern
Jing Tsu
RIVERHEAD BOOKS, 2022

Chinese characters are not as cryptic as they sometimes appear. The general rule is that they stand for a word, or sometimes part of a word, and learning to read is a process of memorization. Along the way, it becomes easier to guess how a character should be spoken, because often phonetic elements are tucked in among other symbols. The characters were traditionally written by hand with a brush, and part of becoming literate involves memorizing the order in which the strokes are made. Put them in the wrong order and the character doesn’t look right. Or rather, as I found some years ago as a second-language learner in Guangzhou, China, it looks childish. (My husband, a translator of Chinese literature, found it hilarious and adorable that at the age of 30, I wrote like a kindergartner.)

The trouble, however, is that there are a lot of characters. One needs to know at least a few thousand to be considered basically literate, and there are thousands more beyond that basic set. Many modern learners of Chinese devote themselves essentially full-time to learning to read, at least in the beginning. More than a century ago, this was such a monumental task that leading thinkers worried it was impairing China’s ability to survive the attentions of more aggressive powers.

In the 19th century, a huge proportion of Chinese people were illiterate. They had little access to schooling. Many were subsistence farmers. China, despite its immense population and vast territory, was perpetually finding itself on the losing end of deals with nimbler, more industrialized nations. The Opium Wars, in the mid-19th century, had led to a situation where foreign powers effectively colonized Chinese soil. What advanced infrastructure there was had been built and was owned by foreigners.  

Some felt these things were connected. Wang Zhao, for one, was a reformer who believed that a simpler way to write spoken Chinese was essential to the survival of the nation. Wang’s idea was to use a set of phonetic symbols, representing one specific dialect of Chinese. If people could sound out words, having memorized just a handful of shapes the way speakers of languages using an alphabet did, they could become literate more quickly. With literacy, they could learn technical skills, study science, and help China get ownership of its future back. 

Wang believed in this goal so strongly that though he’d been thrown out of China in 1898, he returned two years later in disguise. After arriving by boat from Japan, he traveled over land on foot in the costume of a Buddhist monk. His story forms the first chapter of Jing Tsu’s book, and it is thick with drama, including a shouting match and brawl on the grounds of a former palace, during a meeting to decide which dialect a national version of such a system should represent. Wang’s system for learning Mandarin was used by schools in Beijing for a few years, but ultimately it did not survive the rise of competing systems and the period of chaos that swallowed China not long after the Qing Dynasty’s fall in 1911. Decades of disorder and uneasy truces gave way to Japan’s invasion of Manchuria in northern China in 1931. For a long time, basic survival was all most people had time for.

However, strange inventions soon began to turn up in China. Chinese students and scientists abroad had started to work on a typewriter for the language, which they felt was lagging behind others. Texts in English and other tongues using Roman characters could be printed swiftly and cheaply with keyboard-controlled machines that injected liquid metal into type molds, but Chinese texts required thousands upon thousands of bits of type to be placed in a manual printing press. And while English correspondence could be whacked out on a typewriter, Chinese correspondence was still, after all this time, written by hand.      

Of all the technologies Mullaney and Tsu describe, these baroque metal monsters stick most in the mind. Equipped with cylinders and wheels, with type arrayed in starbursts or in a massive tray, they are simultaneously writing machines and incarnations of philosophies about how to organize a language. Because Chinese characters don’t have an inherent order (no A-B-C-D-E-F-G) and because there are so many (if you just glance at 4,000 of them, you’re not likely to spot the one you need quickly), people tried to arrange these bits of type according to predictable rules. The first article ever published by Lin Yutang, who would go on to become one of China’s most prominent writers in English, described a system of ordering characters according to the number of strokes it took to form them. He eventually designed a Chinese typewriter that consumed his life and finances, a lovely thing that failed its demo in front of potential investors.

woman using a large desk-sized terminal
Chinese keyboard designers considered many interfaces, including tabletop-size devices that included 2,000 or more commonly used characters.
PUBLIC DOMAIN/COURTESY OF THOMAS S. MULLANEY

Technology often seems to demand new ways of engaging with the physical, and the Chinese typewriter was no exception. When I first saw a functioning example, at a private museum in a basement in Switzerland, I was entranced by the gliding arm and slender rails of the sheet-cake-size device, its tray full of characters. “Operating the machine was a full-body exercise,” Tsu writes of a very early typewriter from the late 1890s, designed by an American missionary. Its inventor expected that with time, muscle memory would take over, and the typist would move smoothly around the machine, picking out characters and depressing keys. 

However, though Chinese typewriters eventually got off the ground (the first commercial typewriter was available in the 1920s), a few decades later it became clear that the next challenge was getting Chinese characters into the computer age. And there was still the problem of how to get more people reading. Through the 1930s, ’40s, ’50s, and ’60s, systems for ordering and typing Chinese continued to occupy the minds of intellectuals; particularly odd and memorable is the story of the librarian at Sun Yat-sen University in Guangzhou, who in the 1930s came up with a system of light and dark glyphs like semaphore flags to stand for characters. Mullaney and Tsu both linger on the case of Zhi Bingyi, an engineer imprisoned in solitary confinement during the Cultural Revolution in the late 1960s, who was inspired by the characters of a slogan written on his cell wall to devise his own code for inputting characters into a computer.


The tools for literacy were advancing over the same period, thanks to government-mandated reforms introduced after the Communist Revolution in 1949. To assist in learning to read, everyone in mainland China would now be taught pinyin, a system that uses Roman letters to indicate how Chinese characters are pronounced. Meanwhile, thousands of characters would be replaced with simplified versions, with fewer strokes to learn. This is still how it’s done today in the mainland, though in Taiwan and Hong Kong, the characters are not simplified, and Taiwan uses a different pronunciation guide, one based on 37 phonetic symbols and five tone marks.

Myriad ideas were thrown at the problem of getting these characters into computers. Images of a graveyard of failed designs—256-key keyboards and the enormous cylinder of the Ideo-Matic Encoder, a keyboard with more than 4,000 options—are scattered poignantly through Mullaney’s pages. 

In Tsu’s telling, perhaps the most consequential link between this awkward period of dedicated hardware and today’s wicked-quick mobile-phone typing came in 1988, with an idea hatched by engineers in California. “Unicode was envisioned as a master converter,” she writes. “It would bring all human script systems, Western, Chinese, or otherwise, under one umbrella standard and assign each character a single, standardized code for communicating with any machine.” Once Chinese characters had Unicode codes, they could be manipulated by software like any other glyph, letter, or symbol. Today’s input systems allow users to call up and select characters using pinyin or stroke order, among other options.
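The idea of one standardized code per character can be verified in any modern language runtime; a small Python illustration:

```python
# Under Unicode, each Chinese character, like every other glyph,
# is assigned a single standardized code point.
ch = "中"                    # the character zhōng, "middle"
codepoint = ord(ch)          # its Unicode code point as an integer
assert codepoint == 0x4E2D   # 中 is U+4E2D
assert chr(0x4E2D) == ch     # the mapping round-trips
print(f"U+{codepoint:04X}")  # → U+4E2D
```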

There is something curiously deflating, however, about the way both these books end. Mullaney’s careful documenting of the typing machines of the last century and Tsu’s collection of adventurous tales about language show the same thing: A simply unbelievable amount of time, energy, and cleverness was poured into making Chinese characters easier for both machines and the human mind to manipulate. But very few of these systems seem to have had any direct impact on the current solutions, like the pronunciation-led input systems that more than a billion people now use to type Chinese. 

This pattern of evolution isn’t unique to language. As the child of a futurist, I’ve seen firsthand that the path to where we are is littered with technological dead ends. The month after Google Glass, the glasses-borne computer, made headlines, my mother helped set up an exhibit of personal heads-up displays. In the obscurity of a warehouse space, ghostly white foam heads each bore a crown of metal, glass, and plastic, the attempts of various inventors to put a screen in front of our eyes. Augmented reality seemed as if it might finally be arriving in the hands of the people—or, rather, on their faces. 

That version of the future did not materialize, and if augmented-reality viewing ever does become part of everyday life, it won’t be through those objects. When historians write about these devices, in books like these, I don’t think they will be able to trace a chain of unbroken thought, a single arc from idea to fruition.

A charming moment, late in Mullaney’s book, speaks to this. He has been slipping letters in the mailboxes of people he’s found listed as inventors of input methods in the Chinese patent database, and now he’s meeting one such inventor, an elderly man, and his granddaughter in a Beijing Starbucks. The old fellow is pleased to talk about his approach, which involves the graphical shapes of Chinese characters. But his granddaughter drops a bomb on Mullaney when she leans in and whispers, “I think my input system is a bit easier to use.” It turns out both she and her father have built systems of their own. 

The story’s not over, in other words.    

People tinker with technology and systems of thought like those detailed in these two books not just because they have to, but because they want to. And though it’s human nature to want to make a trajectory out of what lies behind us so that the present becomes a grand culmination, what these books detail are episodes in the life of a language. There is no beginning, no middle, no satisfying end. There is only evolution—an endless unfurling of something always in the process of becoming a fuller version of itself. 

Veronique Greenwood is a science writer and essayist based in England. Her work has appeared in the New York Times, the Atlantic, and many other publications.

Readying business for the age of AI

Rapid advancements in AI technology offer unprecedented opportunities to enhance business operations, customer and employee engagement, and decision-making. Executives are eager to see the potential of AI realized. Among 100 C-suite respondents polled in WNS Analytics’ “The Future of Enterprise Data & AI” report, 76% say they are already implementing or planning to implement generative AI solutions. Among those same leaders, however, 67% report struggling with data migration, while others cite data quality issues, talent shortages, and obstacles to data democratization.

MIT Technology Review Insights recently had a conversation with Alex Sidgreaves, chief data officer at Zurich Insurance; Bogdan Szostek, chief data officer at Animal Friends; Shan Lodh, director of data platforms at Shawbrook Bank; and Gautam Singh, head of data, analytics, and AI at WNS Analytics, to discuss how enterprises can navigate the burgeoning era of AI.

AI across industries

There is no shortage of AI use cases across sectors. Retailers are tailoring shopping experiences to individual preferences using customer behavior data and advanced machine-learning models. Traditional AI models can deliver personalized offerings; with generative AI, those offerings are elevated by tailored communication that considers the customer’s persona, behavior, and past interactions. In insurance, generative AI can identify subrogation recovery opportunities that a manual handler might overlook, enhancing efficiency and maximizing recovery potential. Banking and financial services institutions are using AI-driven credit risk management to bolster customer due diligence and strengthen anti-money-laundering efforts. In health care, AI is improving diagnostic accuracy through sophisticated image recognition in radiology, allowing earlier and more precise detection of disease, while predictive analytics enable personalized treatment plans.

The core of successful AI implementation lies in understanding its business value, building a robust data foundation, aligning with the strategic goals of the organization, and infusing skilled expertise across every level of an enterprise.

  • “I think we should also be asking ourselves, if we do succeed, what are we going to stop doing? Because when we empower colleagues through AI, we are giving them new capabilities [and] faster, quicker, leaner ways of doing things. So we need to be true to even thinking about the org design. Oftentimes, an AI program doesn’t work, not because the technology doesn’t work, but the downstream business processes or the organizational structures are still kept as before.” Shan Lodh, director of data platforms, Shawbrook Bank

Whether automating routine tasks, enhancing customer experiences, or providing deeper insights through data analysis, it’s essential to define what AI can do for an enterprise in specific terms. AI’s popularity and broad promises are not good enough reasons to jump headfirst into enterprise-wide adoption. 

“AI projects should come from a value-led position rather than being led by technology,” says Sidgreaves. “The key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?”

Having a good technology partner is crucial to ensure that value is realized. Gautam Singh, head of data, analytics, and AI at WNS, says, “At WNS Analytics, we keep clients’ organizational goals at the center. We have focused and strengthened around core productized services that go deep in generating value for our clients.” Singh explains their approach, “We do this by leveraging our unique AI and human interaction approach to develop custom services and deliver differentiated outcomes.”

The foundation of any advanced technology adoption is data, and AI is no exception. Singh explains, “Advanced technologies like AI and generative AI may not always be the right choice, and hence we work with our clients to understand the need, to develop the right solution for each situation.” With increasingly large and complex data volumes, effectively managing and modernizing data infrastructure is essential to provide the basis for AI tools.

This means breaking down silos. Maximizing AI’s impact requires regular communication and collaboration across departments, from marketing teams working with data scientists to understand customer behavior patterns to IT teams ensuring their infrastructure supports AI initiatives.

  • “I would emphasize customers’ growing expectations in terms of what they expect our businesses to offer them and the quality and speed of service they expect us to provide. At Animal Friends, we see the biggest generative AI potential in sophisticated chatbots and voice bots that can serve our customers 24/7, deliver the right level of service, and be cost effective for our customers.” — Bogdan Szostek, chief data officer, Animal Friends

Investing in domain experts with insight into the regulations, operations, and industry practices is just as necessary in the success of deploying AI systems as the right data foundations and strategy. Continuous training and upskilling are essential to keep pace with evolving AI technologies.

Ensuring AI trust and transparency

Creating trust in generative AI implementation requires the same mechanisms employed for all emerging technologies: accountability, security, and ethical standards. Being transparent about how AI systems are used, the data they rely on, and the decision-making processes they employ can go a long way in forging trust among stakeholders. In fact, The Future of Enterprise Data & AI report finds that 55% of organizations identify “building trust in AI systems among stakeholders” as the biggest challenge when scaling AI initiatives.

“We need talent, we need communication, we need the ethical framework, we need very good data, and so on,” says Lodh. “Those things don’t really go away. In fact, they become even more necessary for generative AI, but of course the usages are more varied.” 

AI should augment human decision-making and business workflows. Guardrails with human oversight ensure that enterprise teams have access to AI tools but are in control of high-risk and high-value decisions.

“Bias in AI can creep in from almost anywhere and will do so unless you’re extremely careful. Challenges come into three buckets. You’ve got privacy challenges, data quality, completeness challenges, and then really training AI systems on data that’s biased, which is easily done,” says Sidgreaves. She emphasizes it is vital to ensure that data is up-to-date, accurate, and clean. High-quality data enhances the reliability and performance of AI models. Regular audits and data quality checks can help maintain the integrity of data.

An agile approach to AI implementation

ROI is always top of mind for business leaders looking to cash in on the promised potential of AI systems. As technology continues to evolve rapidly and the potential use cases of AI grow, starting small, creating measurable benchmarks, and adopting an agile approach can ensure success in scaling solutions. By starting with pilot projects and scaling successful initiatives, companies can manage risks and optimize resources. Sidgreaves, Szostek, and Lodh stress that while it may be tempting to throw everything at the wall and see what sticks, accessing the greatest returns from expanding AI tools means remaining flexible, strategic, and iterative. 

In insurance, two areas where AI has a significant ROI impact are risk and operational efficiency. Sidgreaves underscores that reducing manual processes is essential for large, heritage organizations, and generative AI and large language models (LLMs) are revolutionizing this aspect by significantly diminishing the need for manual activities.

To illustrate her point, she cites a specific example: “Consider the task of reviewing and drafting policy wording. Traditionally, this process would take an individual up to four weeks. However, with LLMs, this same task can now be completed in a matter of seconds.”  

Lodh adds that establishing ROI at the project’s onset and implementing cross-functional metrics are crucial for capturing a comprehensive view of a project’s impact. For instance, using LLMs for writing code is a great example of how IT and information security teams can collaborate. By assessing the quality of static code analysis generated by LLMs, these teams can ensure that the code meets security and performance standards.

“It’s very hard because technology is changing so quickly,” says Szostek. “We need to truly apply an agile approach, do not try to prescribe all the elements of the future deliveries in 12, 18, 24 months. We have to test and learn and iterate, and also fail fast if that’s needed.” 

Navigating the future of the AI era 

The rapid evolution of the digital age continues to bring immense opportunities for enterprises globally, from the C-suite to the factory floor. With no shortage of use cases and promises to boost efficiencies, drive innovation, and improve customer and employee experiences, few business leaders dismiss the proliferation of AI as mere hype. However, the successful and responsible implementation of AI requires a careful balance of strategy, transparency, and robust data privacy and security measures.

  • “It’s really easy as technology people to be driven by the next core thing, but we would have to be solving a business problem. So the key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?” — Alex Sidgreaves, chief data officer, Zurich Insurance

Fully harnessing the power of AI while maintaining trust means defining clear business values, ensuring accountability, managing data privacy, balancing innovation with ethical use, and staying ahead of future trends. Enterprises must remain vigilant and adaptable, committed to ethical practices and an agile approach to thrive in this rapidly changing business landscape.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

New Ecommerce Tools: August 26, 2024

This week’s rundown of new products for merchants includes digital marketing resources, product videos, search engine optimization tools, AI-powered helpdesk assistants, WooCommerce financing, live-stream shopping, Amazon India discounts, and more.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants: August 26, 2024

GoDaddy releases AI-powered digital marketing tools for entrepreneurs. GoDaddy’s new all-in-one AI product, called Digital Marketing, aims to increase the availability of tools to manage search engine optimization, social media, and email marketing. Digital Marketing leverages GoDaddy Studio and GoDaddy Airo for email marketing campaigns, branded images, and videos. Digital Marketing can also be upgraded to unlock generative AI tools such as digital ads from Yelp, help from the Airo assistant, and the marketing calendar.

Home page of GoDaddy

GoDaddy

YouTube and Shopify expand video offering. YouTube has extended its partnership with Shopify. All eligible U.S. Shopify Plus and Advanced merchants can now sign up for YouTube Shopping’s affiliate program through the Google and YouTube app on Shopify. Shopify merchants can leverage the program to showcase products in their videos via creators. Merchants can manage the program and view analytics, content, and products from Google Merchant Center.

Amazon to launch Prime Big Deal Days sale in October. Amazon will run its Prime Big Deal Days event in October this year. Amazon has notified U.S. FBA sellers to deliver their inventory by September 13 for that event, and by October 19 for the Black Friday and Cyber Monday events. For a limited time, sellers will receive 25% off the base rate for storage fees and 15% off the base rate for both processing and transportation fees.

PayPal expands partnership with Adyen in the U.S. PayPal has announced the expansion of its partnership with Adyen, a global financial technology platform. Adyen will offer Fastlane by PayPal to accelerate guest checkout flows for its U.S. enterprise and marketplace customers, with plans to extend the offering globally. The companies expect Fastlane by PayPal to improve consumer shopping experiences and enhance conversion for businesses leveraging Adyen’s platform.

Web page of Fastlane by PayPal

Fastlane by PayPal

Tilt raises $18 million for real-time shopping app. Tilt, a U.K.-based live-shopping startup, has raised $18 million in Series A funding, led by Balderton Capital. According to Tilt, the app has garnered more than 500,000 users since its launch last year. Founded by two ex-employees of Revolut (a financial technology provider), Tilt says its success has stemmed from a focus on live sales of fashion items. The app features in-stream deals and offers, live auctions, interactive events, and more.

Duda partners with ActiveCampaign on email marketing and automation capabilities. Duda, a website builder for digital agencies and SaaS platforms, has announced its integration with ActiveCampaign, a marketing automation, email marketing, and CRM platform. The integration allows Duda customers to leverage ActiveCampaign’s marketing platform. According to Duda, the partnership provides access to more than 20,000 agencies and SaaS companies, supporting over 1 million client websites.

OroCommerce now available on the Oracle Cloud Marketplace. OroCommerce, a B2B-focused commerce platform, is now available on the Oracle Cloud Marketplace and can be deployed on Oracle Cloud Infrastructure. Oracle Cloud Marketplace is a centralized repository of enterprise applications offered by Oracle and its partners. OroCommerce includes CRM and marketplace capabilities with online storefronts optimized for the needs of B2B sellers, with request-for-quote, configure-price-quote, a low-code workflow automation tool, and more.

Home page of OroCommerce

OroCommerce

Tento launches as exclusive lending partner for WooCommerce. Tento, a fintech specializing in financial tools for small and medium-sized businesses, has announced its selection as an exclusive lender for the WooCommerce platform in the U.S. Tento will provide customized lending solutions for WooCommerce merchants, from established brands to startup solopreneurs. According to Tento, the partnership will offer WooCommerce merchants access to working capital, equipment financing, lines of credit, and more.

Rapid URL Indexer launches WordPress plugin for automated Google indexing. Rapid URL Indexer, an SEO tool provider, has released its Rapid URL Indexer for WP plugin that automates the process of submitting WordPress content for indexing on Google search. Users have full control over which post types are automatically submitted for indexing, allowing for a tailored approach to SEO strategy. The plugin also features bulk submission, detailed logging and monitoring, and email notifications.

Amazon India reduces selling fees by up to 12%. Amazon India has announced a 3% to 12% reduction in selling fees for 59 product sub-categories on the marketplace. Effective on September 9, the reductions apply across multiple product segments, such as home furnishing (9% drop), indoor lighting (8% drop), and home products (8% drop). According to Amazon, the decreases will help sellers in time for the Diwali holiday shopping rush.

Knowledge base platform Helpjuice unveils Swifty AI for in-app support. Helpjuice, a knowledge base platform, has announced the launch of Swifty, an AI-powered widget designed to transform customer support. Swifty combines AI with an extensive knowledge base to deliver comprehensive user experiences. Swifty resides in the corner of visitors’ screens, providing relevant answers by accessing and processing information from the knowledge base. Users can search for articles, access FAQs, or engage in natural conversations with the AI chatbot.

Home page of Helpjuice

Helpjuice

Get Your Free Bing SEO Scan

Search engine optimization tools can be pricey, but a few, such as Bing Webmaster Tools, are free. Bing’s toolset provides several helpful resources, including site scans.

To use it, verify your site with Bing. The process is easy if you’ve already verified on Google Search Console: choose “Import sites from Google Search Console” and allow access through your Google account. Otherwise, verify through DNS, meta tags, or a file upload.

Once verified, allow Bing Webmaster Tools a day or two to collect performance and optimization info. Then go to the “Site Scan” section to start the crawl. (It won’t launch automatically.)

The scan can take about a day. The ensuing SEO report contains “Errors,” “Warnings,” and “Notices.”

Bing Webmaster Tools site scan report contains “Errors,” “Warnings,” and “Notices.”

Errors

This section lists all pages with missing or incorrect components affecting organic search. Examples include:

  • Missing meta descriptions,
  • Pages returning 400-499 errors,
  • Broken redirects,
  • Broken canonical URLs,
  • Missing title tags,
  • Pages blocked by robots.txt,
  • Pages returning server timeouts.

Of the reported errors, missing meta descriptions are the least worrisome since they don’t affect search results. Bing will generate a description itself, which it would likely do anyway based on the query.

In my experience, all other errors require fixing. Clicking any error will produce an explanation, instructions for resolving, and the affected pages.

For example, Bing provides the following explanation for the “missing title tag” error:

What is the issue about?

The title is missing in the head section of the page. Search engines use the title tag as an important signal for determining the page’s relevancy for a given keyword search. It is important to ensure that your title tag is unique and descriptive, and contains accurate information about the content of the page. The title should be unique to each page on your website.

How to fix?

Add a title to your page using the title tag, which should be placed inside the <head> section of the page’s source. Write a concise, descriptive, keyword-rich page title that best describes the page content.
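You can run this kind of check yourself before Bing flags it. The following is a minimal sketch using Python’s standard-library html.parser; the class name, helper function, and sample markup are illustrative, not part of Bing’s tooling:

```python
from html.parser import HTMLParser

class TitleChecker(HTMLParser):
    """Collects the text of the first <title> element in a page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        # Text nodes inside <title> accumulate here.
        if self.in_title:
            self.title += data

def page_title(html: str) -> str:
    """Return the page title, or "" if the tag is missing or empty."""
    checker = TitleChecker()
    checker.feed(html)
    return checker.title.strip()

print(page_title("<html><head><title>Blue Widgets | Acme</title></head></html>"))
print(page_title("<html><head></head><body><h1>No title</h1></body></html>")
      or "MISSING TITLE")
```

Running this over a crawl of your own pages would surface the same “missing title tag” errors Bing reports, without waiting a day for the scan.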

Warnings

The Warnings section contains issues that don’t prevent your pages from ranking but, if fixed, could improve your site’s organic rankings. On my Smarty Marketing site, Bing reported these warnings:

  • Images missing alt attributes,
  • Pages with too-long title tags or meta descriptions,
  • Pages with missing H1 tags,
  • Too-long HTML,
  • Missing meta language tags.

Not all of these are critical. I don’t pay much attention to warnings on title tags, meta descriptions, meta language tags, and too-long HTML. Search crawlers are increasingly advanced and can handle longer HTML and missing language tags. Plus, Google has confirmed it uses full title tags as ranking signals even when they are truncated in search listings.

Nonetheless, following Bing’s recommendations won’t hurt. Adding image alt text and H1 tags, in particular, could improve rankings.
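The alt-attribute warning is also easy to audit locally. Here is a small sketch, again using Python’s standard-library html.parser; the class name and sample markup are my own illustrations, not Bing’s implementation:

```python
from html.parser import HTMLParser

class AltAuditor(HTMLParser):
    """Collects the src of every <img> that lacks a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_startendtag(self, tag, attrs):
        # XHTML-style self-closing tags (<img ... />) route through here.
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            # Treat a missing alt and an empty alt="" the same way.
            if not (d.get("alt") or "").strip():
                self.missing_alt.append(d.get("src", "(no src)"))

html = (
    '<img src="logo.png" alt="Smarty Marketing logo">'
    '<img src="hero.jpg">'
    '<img src="divider.gif" alt="">'
)
auditor = AltAuditor()
auditor.feed(html)
print(auditor.missing_alt)  # hero.jpg and divider.gif lack usable alt text
```

Note that an empty alt="" is legitimate for purely decorative images, so results like divider.gif above may be intentional rather than errors.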

Overall, Bing’s SEO report covers the basics. It’s an overview of potentially critical errors — at no cost.

Survey Shows Where SEOs Get Their Clients via @sejournal, @martinibuster

Aleyda Solis recently published the results of her SEOFOMO survey, which revealed actionable facts about where SEOs are getting their clients, what kinds of clients they’re getting, and how they’re promoting themselves to get more clients.

Aleyda Solis (LinkedIn) is a popular digital marketer specializing in international SEO. She publishes the SEOFOMO and MarketingFOMO newsletters, is well known for promoting SEO best practices, and speaks at search conferences around the world.

This survey, conducted with a self-selected group of 337 respondents, offers a view of how SEO consultants get new business, who their clients are, and what they do for them in 2024. The data offers valuable insights for digital marketers who want to grow their businesses.

For example, most of the respondents indicated that their area of specialization is Technical SEO. That makes sense because fixing website issues that make it easy for search engines to crawl, index, and understand content is at the heart of what it means to optimize a site for search engines.

This is the kind of work SEOs specialize in:

  • 64% Technical SEO
  • 60% Content Focused SEO
  • 54% SEO Generalist
  • 51% SEO Management
  • 14% Link Builder
  • 8% Other

Only 14% of respondents answered that they build links for clients. That doesn’t mean link building is not a priority; rather, it may reflect that link building remains a difficult area to specialize in because of its complexity.

Who Answered The Survey

Aleyda’s survey consists of a sample of 337 respondents who self-identify as SEO specialists.

  • 44% are full-time agency SEO consultants
  • 32% are freelance and independent SEO consultants
  • 24% are part-time SEO consultants
  • 6% identified simply as Other

Data About SEO Clients

The survey results indicate that most respondents work with small and medium-sized clients: 49% work with sites of fewer than 100K URLs, and 33% work with client sites containing between 100K and 1 million URLs.

Local search SEO is perhaps the largest segment of small to medium businesses. This market, once challenging to scale with quality service at a reasonable price, has evolved. Today, agencies can partner with platforms like Wix or Duda, both high-performance website building platforms that integrate marketing and SEO tools and streamline client management and billing. These technologies enable SEO agencies of any size to scale more easily, allowing them to focus on marketing and SEO rather than the underlying platform technology.

SEOs who self-identified as working with larger clients represented 18% of respondents. I’ve worked with large multinational B2B companies, and there’s good money in it, but it can also be frustrating how limited your influence over the SEO can be. You just have to take your wins where you can get them and accept that you’ll never get your wish list fulfilled.

This Is Where SEOs Get Their Business

There is great diversity in where SEOs get their business, which perhaps reflects individual priorities and strengths. Not everyone has the time to be a social media influencer or the ability to get up and speak in front of audiences at industry events, right?

Referrals

The most highly cited source of clients is referrals from other companies and clients, accounting for 77% of responses. Putting in the hard work to cultivate happy clients and to promote an SEO business pays off in referrals.

Ranking In Search Results

Ranking in search results for phrases that bring in clients was the second-largest reported source of new business. This approach makes sense for SEOs who have a geographic reach or who focus on a single vertical like personal injury. Combining a vertical (like personal injury or HVAC) with a geographic market can be easier to rank for than the general term alone, especially given that Google tends to personalize general search queries.

Social Media

Many respondents cited social media participation as a source of SEO clients, with 26% indicating it brings them new clients and referrals.

That totally makes sense. I’ve seen SEOs with little experience come out of nowhere and win SEO business through extensive social media posting, participating in conversations, and starting them. There’s a strong validating component to cultivating a large social media following, so it makes sense that social media is an important channel for obtaining referrals.

Something I see in the most successful SEOs who promote with social media is that they don’t tend to post on controversial topics like politics but do tend to get behind popular causes. Social media is a lot of work but the SEOFOMO survey validates social media as an important channel to focus on for building a client base.

Events

Twenty-five percent of respondents cited events as a channel for cultivating clients. Speaking at events can be a good way to get clients, or at least to build a good reputation that leads to client referrals.

Speaking at events can be tricky because you have to have something interesting to say but it can’t be too complicated because you’re going to lose or bore the audience. I’ve been speaking at events since around 2004 and have seen others freeze in fear, spontaneously utter crazy things or bore the audience with a monotone speaking style (which is why I sometimes choose to speak first!).

Here are the top sources of SEO client business:

  • Client Or Company Referrals 77%
  • SEO 39%
  • Social Media 26%
  • Industry Events 25%
  • Blog Posts, Studies, Research
  • Other 15%
  • Cold Outreach 14%
  • Paid Outreach 9%

Getting Clients Requires Proactive Work

Although many SEOs reported receiving client and company referrals, reaching that point requires putting in the time to cultivate happy clients, marketing the business in person and online, and demonstrating thought leadership. Everyone has their superpower: some are great speakers, some are great thinkers. Some SEOs find happiness making videos, while others are more comfortable sharing on podcasts. Whatever your superpower is, get out there, discover it, and be the best at what you excel at.

Read the results of the survey here:

The SEOFOMO State of SEO Consulting – Survey 2024 Results

Featured Image by Shutterstock/Khosro

Google To Replace Classic Merchant Center By September via @sejournal, @MattGSouthern

Google to replace Merchant Center with new version by September, introducing AI tools and unified management for online and local listings.

  • Google will replace classic Merchant Center with Merchant Center Next by September.
  • New features include Product Studio, consolidated analytics, and unified product management.
  • Transition may pose challenges, but Google promises improved functionality and provides resources.
Google Now Uses Open Graph Title Tag (og:title) For Title Links via @sejournal, @MattGSouthern

In an update to its search documentation, Google has expanded the list of sources it uses to generate title links in search results.

Google now includes the og:title meta tag as one of the elements it considers when automatically creating title links for web pages.

Screenshot from: developers.google.com/search/updates, August 2024.

Title links, which appear as clickable headlines for search results, give people a quick introduction to a webpage and how well it matches their search.

Google’s system for generating title links has long relied on various on-page elements. Adding og:title expands the list of criteria Google uses.

Understanding og:title

The og:title tag allows you to specify a title for your content that may differ from the traditional HTML title tag. This can be useful for optimizing how a page appears when shared on social networks or, now, in search results.

The og:title tag is part of the Open Graph protocol, a set of meta tags developed by Facebook that allows any web page to become a rich object in social graphs.

While it’s used to control how content appears on social media platforms, Google’s inclusion of this tag in its title link sources indicates a broader use of Open Graph data.
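An og:title tag is an ordinary meta element, so checking whether your pages declare one is straightforward. This is a minimal sketch using Python’s standard-library html.parser; the class name and sample markup are illustrative assumptions, not anything from Google’s documentation:

```python
from html.parser import HTMLParser

class OGTitleParser(HTMLParser):
    """Finds the content of the first <meta property="og:title"> tag."""

    def __init__(self):
        super().__init__()
        self.og_title = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and self.og_title is None:
            d = dict(attrs)
            if d.get("property") == "og:title":
                self.og_title = d.get("content")

html = (
    '<head><title>Acme Inc. - Home</title>'
    '<meta property="og:title" content="Acme: Blue Widgets Since 1999">'
    '</head>'
)
parser = OGTitleParser()
parser.feed(html)
print(parser.og_title)
```

The sample page above is the interesting case: its HTML title and og:title differ, so Google could now draw the title link from either source.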

Impact On SEO & Content Strategy

With this update, you may need to pay closer attention to og:title tags, ensuring they accurately represent page content while remaining engaging for searchers.

Google’s documentation now lists the following sources for automatically determining title links:

  1. Content in <title> elements
  2. Main visual title shown on the page
  3. Heading elements, such as <h1> elements
  4. Content in og:title meta tags
  5. Other large and prominent text through style treatments
  6. Other page content
  7. Anchor text on the page
  8. Text within links pointing to the page
  9. Website structured data

While Google says its title link generation is automated, understanding the sources it uses can help you influence how pages appear in search.

Best Practices Remain Unchanged

Google’s best practices for title links remain largely unchanged. The company recommends creating unique, descriptive titles for each page, avoiding keyword stuffing, and ensuring titles accurately reflect page content.

Note that changes to these elements may take time to be reflected in search results, as pages must be recrawled and reprocessed.


Featured Image: Sir. David/Shutterstock