What I learned from the UN’s “AI for Good” summit

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Greetings from Switzerland! I’ve just come back from Geneva, which last week hosted the UN’s AI for Good Summit, organized by the International Telecommunication Union. The summit’s big focus was how AI can be used to meet the UN’s Sustainable Development Goals, such as eradicating poverty and hunger, achieving gender equality, and promoting clean energy and climate action. 

The conference featured lots of robots (including one that dispenses wine), but what I liked most was how it convened people working in AI from around the globe, with speakers from China, the Middle East, and Africa, such as Pelonomi Moiloa, the CEO of Lelapa AI, a startup building AI for African languages. AI can be very US-centric and male dominated, and any effort to make the conversation more global and diverse is laudable. 

But honestly, I didn’t leave the conference feeling confident AI was going to play a meaningful role in advancing any of the UN goals. In fact, the most interesting speeches were about how AI is doing the opposite. Sage Lenier, a climate activist, talked about how we must not let AI accelerate environmental destruction. Tristan Harris, the cofounder of the Center for Humane Technology, gave a compelling talk connecting the dots between our addiction to social media, the tech sector’s financial incentives, and our failure to learn from previous tech booms. And there are still deeply ingrained gender biases in tech, Mia Shah-Dand, the founder of Women in AI Ethics, reminded us. 

So while the conference itself was about using AI for “good,” I would have liked to see more talk about how increased transparency, accountability, and inclusion could make AI itself good from development to deployment.

We now know that generating one image with generative AI uses as much energy as charging a smartphone. I would have liked more honest conversations about how to make the technology more sustainable itself in order to meet climate goals. And it felt jarring to hear discussions about how AI can be used to help reduce inequalities when we know that so many of the AI systems we use are built on the backs of human content moderators in the Global South who sift through traumatizing content while being paid peanuts. 

Making the case for the “tremendous benefit” of AI was OpenAI’s CEO Sam Altman, the star speaker of the summit. Altman was interviewed remotely by Nicholas Thompson, the CEO of the Atlantic, which has incidentally just announced a deal for OpenAI to share its content to train new AI models. OpenAI is the company that instigated the current AI boom, and it would have been a great opportunity to ask him about all these issues. Instead, the two had a relatively vague, high-level discussion about safety, leaving the audience none the wiser about what exactly OpenAI is doing to make its systems safer. It seemed the audience was simply supposed to take Altman’s word for it. 

Altman’s talk came a week or so after Helen Toner, a researcher at the Georgetown Center for Security and Emerging Technology and a former OpenAI board member, said in an interview that the board found out about the launch of ChatGPT through Twitter, and that Altman had on multiple occasions given the board inaccurate information about the company’s formal safety processes. She has also argued that it is a bad idea to let AI firms govern themselves, because the immense profit incentives will always win. (Altman said he “disagree[s] with her recollection of events.”) 

When Thompson asked Altman what the first good thing to come out of generative AI will be, Altman mentioned productivity, citing examples such as software developers who can use AI tools to do their work much faster. “We’ll see different industries become much more productive than they used to be because they can use these tools. And that will have a positive impact on everything,” he said. I think the jury is still out on that one. 


Now read the rest of The Algorithm

Deeper Learning

Why Google’s AI Overviews gets things wrong

Google’s new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results. Unfortunately, within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. It suggested that users add glue to pizza or eat at least one small rock a day.

MIT Technology Review explains: In order to understand why AI-powered search engines get things wrong, we need to look at how they work. The models that power them simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. Worst of all? There’s probably no way to fix things. That’s why you shouldn’t trust AI search engines. Read more from Rhiannon Williams here.

Bits and Bytes

OpenAI’s latest blunder shows the challenges facing Chinese AI models
OpenAI’s GPT-4o data set is polluted by Chinese spam websites. But this problem is indicative of a much wider issue for those building Chinese AI services: finding the high-quality data sets they need to be trained on is tricky, because of the way China’s internet functions. (MIT Technology Review)

Five ways criminals are using AI
Artificial intelligence has brought a big boost in productivity—to the criminal underworld. Generative AI has made phishing, scamming, and doxxing easier than ever. (MIT Technology Review)

OpenAI is rebooting its robotics team
After disbanding its robotics team in 2020, the company is trying again. The resurrection is in part thanks to rapid advancements in robotics brought by generative AI. (Forbes)

OpenAI found Russian and Chinese groups using its tech for propaganda campaigns
OpenAI said that it caught, and removed, groups from Russia, China, Iran, and Israel that were using its technology to try to influence political discourse around the world. But this is likely just the tip of the iceberg when it comes to how AI is being used to affect this year’s record-breaking number of elections. (The Washington Post)

Inside Anthropic, the AI company betting that safety can be a winning strategy
The AI lab Anthropic, creator of the Claude model, was started by former OpenAI employees who resigned over “trust issues.” This profile is an interesting peek inside one of OpenAI’s competitors, showing how the ideology behind AI safety and effective altruism is guiding business decisions. (Time)

AI-directed drones could help find lost hikers faster
Drones are already used for search and rescue, but planning their search paths is more art than science. AI could change that. (MIT Technology Review)

How a simple circuit could offer an alternative to energy-intensive GPUs

On a table in his lab at the University of Pennsylvania, physicist Sam Dillavou has connected an array of breadboards via a web of brightly colored wires. The setup looks like a DIY home electronics project—and not a particularly elegant one. But this unassuming assembly, which contains 32 variable resistors, can learn to sort data like a machine-learning model.

While its current capability is rudimentary, the hope is that the prototype will offer a low-power alternative to the energy-guzzling graphics processing unit (GPU) chips widely used in machine learning. 

“Each resistor is simple and kind of meaningless on its own,” says Dillavou. “But when you put them in a network, you can train them to do a variety of things.”

Sam Dillavou’s laboratory at the University of Pennsylvania is using circuits composed of resistors to perform simple machine-learning classification tasks. (Photo: Felice Macera)

A task the circuit has performed: classifying flowers by properties such as petal length and width. When given these flower measurements, the circuit could sort them into three species of iris. This kind of activity is known as a “linear” classification problem, because when the iris information is plotted on a graph, the data can be cleanly divided into the correct categories using straight lines. In practice, the researchers represented the flower measurements as voltages, which they fed as input into the circuit. The circuit then produced an output voltage, which corresponded to one of the three species. 

This is a fundamentally different way of encoding data from the approach used in GPUs, which represent information as binary 1s and 0s. In this circuit, information can take on a maximum or minimum voltage or anything in between. The circuit classified 120 irises with 95% accuracy. 

Now the team has managed to make the circuit perform a more complex problem. In a preprint currently under review, the researchers have shown that it can perform a logic operation known as XOR, in which the circuit takes in two binary numbers and determines whether the inputs are the same. This is a “nonlinear” classification task, says Dillavou, and “nonlinearities are the secret sauce behind all machine learning.” 
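The circuit itself is analog hardware, but the distinction Dillavou draws can be illustrated in software. The sketch below (an illustration of the general idea, not the team’s method) brute-forces a search for a separating line over the XOR truth table: no line exists, but adding a single nonlinear feature (the product of the two inputs) makes the problem linearly separable again.

```python
from itertools import product

# XOR truth table: two binary inputs, label 1 when the inputs differ
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def separable(points, dim):
    """Brute-force grid search for a separating hyperplane
    satisfying (w . x + b > 0) exactly when label == 1.
    A coarse grid is enough for a 4-point problem."""
    steps = [i / 4 for i in range(-8, 9)]   # -2.0 .. 2.0 in 0.25 steps
    for *w, b in product(steps, repeat=dim + 1):
        if all((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (label == 1)
               for x, label in points):
            return True
    return False

# Raw XOR is not linearly separable: no single line splits the classes
print(separable(data, dim=2))               # False

# One nonlinear feature (x * y) lifts XOR into a separable problem,
# e.g. the plane x + y - 2*(x*y) - 0.5 > 0 classifies it correctly
lifted = [((x, y, x * y), label) for (x, y), label in data]
print(separable(lifted, dim=3))             # True
```

This is the sense in which “nonlinearities are the secret sauce”: a purely linear classifier, whether implemented in code or in resistors, cannot compute XOR without some nonlinear ingredient.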

Their demonstrations are a walk in the park for the devices you use every day. But that’s not the point: Dillavou and his colleagues built this circuit as an exploratory effort to find better computing designs. The computing industry faces an existential challenge as it strives to deliver ever more powerful machines. Between 2012 and 2018, the computing power required for cutting-edge AI models increased 300,000-fold. Now, training a large language model takes the same amount of energy as the annual consumption of more than a hundred US homes. Dillavou hopes that his design offers an alternative, more energy-efficient approach to building faster AI.

Training in pairs

To perform its various tasks correctly, the circuitry requires training, just like contemporary machine-learning models that run on conventional computing chips. ChatGPT, for example, learned to generate human-sounding text after being shown many instances of real human text; the circuit learned to predict which measurements corresponded to which type of iris after being shown flower measurements labeled with their species. 

Training the device involves using a second, identical circuit to “instruct” the first device. Both circuits start with the same resistance values for each of their 32 variable resistors. Dillavou feeds both circuits the same inputs—a voltage corresponding to, say, petal width—and adjusts the output voltage of the second circuit to correspond to the correct species. The first circuit receives feedback from that second circuit, and both circuits adjust their resistances so they converge on the same values. The cycle starts again with a new input, until the circuits have settled on a set of resistance levels that produce the correct output for the training examples. In essence, the team trains the device via a method known as supervised learning, where an AI model learns from labeled data to predict the labels for new examples.
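The coupled-circuit procedure is a physical analogue of supervised learning. As a software-level sketch of the same learn-from-labeled-examples loop (not the team’s coupled-learning algorithm), here is the classic perceptron update rule, where each weight adjusts itself from a simple local feedback signal, much as each resistor does:

```python
# Toy labeled data: two linearly separable clusters in 2D
examples = [((1.0, 1.0), 1), ((1.5, 2.0), 1), ((2.0, 1.5), 1),
            ((-1.0, -1.0), 0), ((-1.5, -0.5), 0), ((-0.5, -2.0), 0)]

w = [0.0, 0.0]   # learnable weights (analogous to resistor values)
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(50):                              # training epochs
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                       # local feedback signal
        w[0] += lr * err * x1                    # each weight updates from
        w[1] += lr * err * x2                    # its own input and the error
        b += lr * err

# After training, every example should be classified correctly
correct = sum(1 for (x1, x2), label in examples
              if (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label)
print(correct, "of", len(examples))
```

As in the circuit, no component sees the whole problem: each weight follows a simple rule driven by feedback on the output, and correct classification emerges collectively.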

It can help, Dillavou says, to think of the electric current in the circuit as water flowing through a network of pipes. The equations governing fluid flow are analogous to those governing electron flow and voltage. Voltage corresponds to fluid pressure, while electrical resistance corresponds to the pipe diameter. During training, the different “pipes” in the network adjust their diameter in various parts of the network in order to achieve the desired output pressure. In fact, early on, the team considered building the circuit out of water pipes rather than electronics. 

For Dillavou, one fascinating aspect of the circuit is what he calls its “emergent learning.” In a human, “every neuron is doing its own thing,” he says. “And then as an emergent phenomenon, you learn. You have behaviors. You ride a bike.” It’s similar in the circuit. Each resistor adjusts itself according to a simple rule, but collectively they “find” the answer to a more complicated question without any explicit instructions. 

A potential energy advantage

Dillavou’s prototype qualifies as a type of analog computer—one that encodes information along a continuum of values instead of the discrete 1s and 0s used in digital circuitry. The first computers were analog, but their digital counterparts superseded them after engineers developed fabrication techniques to squeeze more transistors onto digital chips to boost their speed. Still, experts have long known that as they increase in computational power, analog computers offer better energy efficiency than digital computers, says Aatmesh Shrivastava, an electrical engineer at Northeastern University. “The power efficiency benefits are not up for debate,” he says. However, he adds, analog signals are much noisier than digital ones, which makes them ill suited for any computing tasks that require high precision.

In practice, Dillavou’s circuit hasn’t yet surpassed digital chips in energy efficiency. His team estimates that their design uses about 5 to 20 picojoules per resistor to generate a single output, where each resistor represents a single parameter in a neural network. Dillavou says this is about a tenth as efficient as state-of-the-art AI chips. But he says that the promise of the analog approach lies in scaling the circuit up, to increase its number of resistors and thus its computing power.

He explains the potential energy savings this way: digital chips like GPUs expend energy per operation, so a chip that performs more operations per second simply uses more energy per second. His analog computer’s energy use, by contrast, depends on how long it is switched on. If the team makes the computer twice as fast, each computation takes half the time and therefore half the energy, making it twice as energy efficient. 

Dillavou’s circuit is also a type of neuromorphic computer, meaning one inspired by the brain. Like other neuromorphic schemes, the researchers’ circuitry doesn’t operate according to top-down instruction the way a conventional computer does. Instead, the resistors adjust their values in response to external feedback in a bottom-up approach, similar to how neurons respond to stimuli. In addition, the device does not have a dedicated component for memory. This could offer another energy efficiency advantage, since a conventional computer expends a significant amount of energy shuttling data between processor and memory. 

While researchers have already built a variety of neuromorphic machines based on different materials and designs, the most technologically mature designs are built on semiconducting chips. One example is Intel’s neuromorphic computer Loihi 2, to which the company began providing access for government, academic, and industry researchers in 2021. DeepSouth, a chip-based neuromorphic machine at Western Sydney University that is designed to be able to simulate the synapses of the human brain at scale, is scheduled to come online this year.

The machine-learning industry has shown interest in chip-based neuromorphic computing as well, with a San Francisco–based startup called Rain Neuromorphics raising $25 million in February. However, researchers still haven’t found a commercial application where neuromorphic computing definitively demonstrates an advantage over conventional computers. In the meantime, researchers like Dillavou’s team are putting forth new schemes to push the field forward. A few people in industry have expressed interest in his circuit. “People are most interested in the energy efficiency angle,” says Dillavou. 

But their design is still a prototype, with its energy savings unconfirmed. For their demonstrations, the team kept the circuit on breadboards because it’s “the easiest to work with and the quickest to change things,” says Dillavou, but the format suffers from all sorts of inefficiencies. They are testing their device on printed circuit boards to improve its energy efficiency, and they plan to scale up the design so it can perform more complicated tasks. It remains to be seen whether their clever idea can take hold outside the lab.

How QWERTY keyboards show the English dominance of tech

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Have you ever thought about the miraculous fact that despite the myriad differences between languages, virtually everyone uses the same QWERTY keyboards? Many languages have more or fewer than 26 letters in their alphabet—or no “alphabet” at all, like Chinese, which has tens of thousands of characters. Yet somehow everyone uses the same keyboard to communicate.

Last week, MIT Technology Review published an excerpt from a new book, The Chinese Computer, which talks about how this problem was solved in China. After generations of work to sort Chinese characters, modify computer parts, and create keyboard apps that automatically predict the next character, it is finally possible for any Chinese speaker to use a QWERTY keyboard. 

But the book doesn’t stop there. It ends with a bigger question about what this all means: Why is it necessary for speakers of non-Latin languages to adapt modern technologies for their uses, and what do their efforts contribute to computing technologies?

I talked to the book’s author, Tom Mullaney, a professor of history at Stanford University. We ended up geeking out over keyboards, computers, the English-centric design that underlies everything about computing, and even how keyboards affect emerging technologies like virtual reality. Here are some of his most fascinating answers, lightly edited for clarity and brevity. 

Mullaney’s book covers many experiments across multiple decades that ultimately made typing Chinese possible and efficient on a QWERTY keyboard, but a similar process has played out all around the world. Many countries with non-Latin languages had to work out how they could use a Western computer to input and process their own languages.

Mullaney: In the Chinese case—but also in Japanese, Korean, and many other non-Western writing systems—this wasn’t done for fun. It was done out of brute necessity because the dominant model of keyboard-based computing, born and raised in the English-speaking world, is not compatible with Chinese. It doesn’t work because the keyboard doesn’t have the necessary real estate. And the question became: I have a few dozen keys but 100,000 characters. How do I map one onto the other? 

Simply put, half of the population on Earth uses the QWERTY keyboard in ways the QWERTY keyboard was never intended to be used, creating a radically different way of interacting with computers.

The root of all of these problems is that computers were designed with English as the default language. So the way English works is just the way computers work today.

M: Every writing system on the planet throughout history is modular, meaning it’s built out of smaller pieces. But computing carefully, brilliantly, and understandably worked on one very specific kind of modularity: modularity as it functions in English. 

And then everybody else had to fit themselves into that modularity. Arabic letters connect, so you have to fix [the computer for it]; in South Asian scripts, the combination of a consonant and a vowel changes the shape of the letter overall—that’s not how modularity works in English. 

The English modularity is so fundamental in computing that non-Latin speakers are still grappling with the impacts today despite decades of hard work to change things.

Mullaney shared a complaint that Arabic speakers made in 2022 about Adobe InDesign, the most popular publishing design software. As recently as two years ago, pasting a string of Arabic text into the software could cause the text to become messed up, misplacing its diacritic marks, which are crucial for indicating phonetic features of the text. It turns out you need to install a Middle East version of the software and apply some deliberate workarounds to avoid the problem.

M: Latin alphabetic dominance is still alive and well; it has not been overthrown. And there’s a troubling question as to whether it can ever be overthrown. Some turn was made, some path taken that advantaged certain writing systems at a deep structural level and disadvantaged others. 

That deeply rooted English-centric design is why mainstream input methods never deviate too far from the keyboards that we all know and love/hate. In the English-speaking world, there have been numerous attempts to reimagine the way text input works. Technologies such as the T9 phone keyboard or the Palm Pilot handwriting alphabet briefly achieved some adoption, but none stuck for long, because most developers snap back to QWERTY keyboards at the first opportunity.

M: T9 was born in the context of disability technology and was incorporated into the first mobile phones because button real estate was a major problem (prior to the BlackBerry reintroducing the QWERTY keyboard). It was a necessity; [developers] actually needed to think in a different way. But give me enough space, give me 12 inches by 14 inches, and I’ll default to a QWERTY keyboard.

Every 10 years or so, some Western tech company or inventor announces: “Everybody! I have finally figured out a more advanced way of inputting English at much higher speeds than the QWERTY keyboard.” And time and time again there is zero market appetite. 

Will the QWERTY keyboard stick around forever? After this conversation, I’m secretly hoping it won’t. Maybe it’s time for a change. With new technologies like VR headsets and other gadgets on the horizon, there may come a time when QWERTY keyboards are not the first preference, and non-Latin languages may finally have a chance to shape the new norms of human-computer interaction. 

M: It’s funny, because now as you go into augmented and virtual reality, Silicon Valley companies are like, “How do we overcome the interface problem?” Because you can shrink everything except the QWERTY keyboard. And what Western engineers fail to understand is that it’s not a tech problem—it’s a technological cultural problem. And they just don’t get it. They think that if they just invent the tech, it is going to take off. And thus far, it never has.

If I were a software or hardware developer, I would be hanging out in online role-playing games, just in the chat feature; I would be watching people use their TV remote controls to find the title of the film they’re looking for; I would look at how Roblox players chat with each other. It’s going to come from some arena outside the mainstream, because the mainstream is dominated by QWERTY.

What are other signs of the dominance of English in modern computing? I’d love to hear about the geeky details you’ve noticed. Send them to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Today marks the 35th anniversary of the student protests and subsequent massacre in Tiananmen Square in Beijing. 

  • For decades, Hong Kong was the hub for Tiananmen memorial events. That’s no longer the case, due to Beijing’s growing control over the city’s politics after the 2019 protests. (New Yorker $)
  • To preserve the legacy of the student protesters at Tiananmen, it’s also important to address ethical questions about how American universities and law enforcement have been treating college protesters this year. (The Nation)

2. A Chinese company that makes laser sensors was labeled by the US government as a security concern. A few months later, it discreetly rebranded as a Michigan-registered company called “American Lidar.” (Wall Street Journal $)

3. It’s a tough time to be a celebrity in China. An influencer dubbed “China’s Kim Kardashian” for his extravagant displays of wealth has just been banned by multiple social media platforms after the internet regulator announced an effort to clear out “ostentatious personas.” (Financial Times $)

  • Meanwhile, Taiwanese celebrities who also have large followings in China are increasingly finding themselves caught in the political crossfire. (CNN)

4. Cases of Chinese students being denied entry into the US reveal divisions within the Biden administration. Customs agents, who work for the Department of Homeland Security, have canceled an increasing number of student visas that had already been approved by the State Department. (Bloomberg $)

5. Palau, a small Pacific island nation that’s one of the few countries in the world that recognizes Taiwan as a sovereign country, says it is under cyberattack by China. (New York Times $)

6. After being the first space mission to collect samples from the moon’s far side, China’s Chang’e-6 lunar probe has begun its journey back to Earth. (BBC)

7. The Chinese government just set up the third and largest phase of its semiconductor investment fund to prop up its domestic chip industry. This one’s worth $47.5 billion. (Bloomberg $)

Lost in translation

The Chinese generative AI community has been stirred up by the first discovery of a Western large language model plagiarizing a Chinese one, according to the Chinese publication PingWest. 

Last week, two undergraduate computer science students at Stanford University released an open-source model called Llama 3-V that they claimed is more powerful than LLMs made by OpenAI and Google, while costing less. But Chinese AI researchers soon found out that Llama 3-V had copied the structure, configuration files, and code from MiniCPM-Llama3-V 2.5, another open-source LLM developed by China’s Tsinghua University and ModelBest Inc, a Chinese startup. 

What proved the plagiarism was the fact that the Chinese team had secretly trained the model on a collection of Chinese writings on bamboo slips from 2,000 years ago, and no other LLM can accurately recognize the Chinese characters in this ancient writing style. Yet Llama 3-V could recognize these characters as well as MiniCPM did, while making exactly the same mistakes as the Chinese model. The students who released Llama 3-V have removed the model and apologized to the Chinese team, but the Chinese AI community sees the incident as proof of the rapidly improving capabilities of homegrown LLMs. 

One more thing

Hand-crafted squishy toys (or pressure balls) in the shape of cute animals or desserts have become the latest viral products on Chinese social media. Made in small quantities and sold in limited batches, some of them go for up to $200 per toy on secondhand marketplaces. I mean, they are cute for sure, but I’m afraid the idea of spending $200 on a pressure ball only increases my anxiety.

Charts: Business Adoption of AI Q2 2024

Business adoption of artificial intelligence has surged globally in the past year, according to a study released last month by McKinsey & Company titled “The state of AI in early 2024.”

In February and March, McKinsey surveyed 1,363 executives from various global regions, industries, company sizes, functional specialties, and tenures. Roughly 72% of respondents reported using at least one AI application.

Per the survey, respondents report adopting generative AI mainly in marketing and sales, product and service development, and information technology departments.

Most respondents reported spending less than 6% of the total digital budget on generative AI despite its widespread adoption.

The survey also examined the cost savings from generative AI across different business functions.

Rand Fishkin At MozCon: Rethinking Strategies Amid Google API “Leak” via @sejournal, @MattGSouthern

At the MozCon industry conference this week, Rand Fishkin, the outspoken former CEO of Moz and founder of SparkToro, shared his opinion on how SEOs and marketers should potentially adjust strategies based on his interpretation of the recent Google API leaks.

In a packed session with Dr. Pete Meyers, Fishkin laid out specific ways he believes the leaked information, which has not been verified, could impact best practices.

Fishkin firmly believes the leaks contradict Google’s public statements about its systems.

“Google has been unkind and unfair. They have been abusive about this,” Fishkin stated, though these are his opinions based on reviewing the leaks.

On Google’s lack of transparency, Fishkin states:

“Google has told us off and on that they don’t use clicks for ranking. And I always heard it, maybe this is charitable on my part, as we don’t use capital ‘C’ clicks for capital ‘R’ ranking. And the truth is, I think even that was charitable on my case.

And we’ve seen in not just these documents, but anyone who’s familiar with Pandu Nayak’s testimony last year, it’s really confirming a lot of what we saw, a lot of what we saw with Navboost.”

He adds:

“They have lied through either omission or misinformation.”

Fishkin’s Recommendations

Fishkin acknowledged he was speculating, but he provided concrete examples of how SEO strategies could change if his interpretations of the leaks are accurate.

However, these are his opinions, not directives. Among his potential recommendations:

1. Invest In Author/Entity Authority

Surprised by the continued emphasis on authorship and entity signals in the leaked code, Fishkin said brands should prioritize hiring writers with established reputational authority that Google already associates with quality content.

Fishkin said this is what he’s going to do differently:

“We’re going to hire a content marketer, basically a part-time content person, to make sure that the SparkToro blog has a couple of new posts on it every week.

And all that authorship and entity stuff made me think we should find someone who already has a profile.”

2. Supplement Link-building With Public Relations

According to Fishkin, the leaks uncovered potential evidence that Google devalues links to sites without sufficient brand awareness and search volume.

As a result, he recommends accompanying traditional link acquisition with broader brand-building efforts like PR and advertising to increase branded search demand.

Fishkin stated:

“If you get a whole bunch of links in one day and nothing else, guess what? You manipulated the link graph.

If you’re really a big brand, people should be talking about you.”

3. Embrace Geographic Nuance

With abundant references to geographic and country-specific signals throughout the code, Fishkin cautioned against one-size-fits-all global strategies.

What works for major markets like the US may prove ineffective for smaller regions where Google needs more data.

Fishkin advised attendees:

“I would encourage you to think about SEO as being more geographically specific than you think it is even for web search results.”

4. Rediscover Experimentation

More than anything, Fishkin hopes the leaks will catalyze a renewed sense of curiosity and skepticism within SEO.

On the value of experimentation, Fishkin says:

“We’ve seen it over and over. One thing we’ve lost, I feel like, is that spirit of experimentation. And with these things coming out where I don’t think we can take what Google says for granted, how do you see, how do we get that back?”

He challenged practitioners to move beyond regurgitating Google’s public statements and instead embrace testing to uncover what drives results.

Referring to an unexplained metric surfaced in the leaks, Fishkin states:

“My dream would be that if I were to come back to MozCon next year, somebody would be on this stage, and they’d be like, ‘Guys, I figured out what Keto score is.’ Publish that. I’ll amplify it.”

A Wakeup Call?

In many ways, Fishkin framed the leaks as a pivotal moment for an industry he believes has grown insular, conflict-averse, and too accepting of Google’s carefully crafted narratives.

His call to action left some energized and others put off by its unrestrained bluntness.

But whether one admires Fishkin’s brash delivery or not, the leaks have undeniably cracked open Google’s black box.

For those willing to dig into the technical details and chart their own path through testing, Fishkin argues, lucrative opportunities await.

A Word Of Caution Regarding The Google API Leak

Doubts have emerged about the true nature and significance of this “leak.”

Evidence suggests the data may be connected to Google’s public Document AI Warehouse API rather than exposing the ranking system’s inner workings. The information also appears to be at least five years old.

While Fishkin’s plans to adjust his SEO tactics are interesting, they should be taken with a grain of salt, given the ongoing debate over what the data really signifies.

It illustrates the importance of vetting sources when evaluating any supposed “insider information” about how search engines operate.

As the discussion around the Google “leak” continues, be careful not to fall victim to confirmation bias—seeing the data through the lens of pre-existing theories rather than objectively assessing it.


Featured Image: Taken by author at MozCon, June 2024. 

Mastering Content Mapping: A Strategic Approach To Boost SEO via @sejournal, @AdamHeitzman

Successful SEO content should act as a guide – not just around your website, but for users on their specific journey to answers and solutions.

Visitors to your website arrive with different intentions and levels of awareness.

Some may be casually browsing your site for the first time, others might be actively researching your products or services, and others may be on the verge of making a purchase.

So, what should you do to ensure your website caters to these diverse goals and needs? And how should you use content to facilitate the journey from early-stage prospect to repeat customer?

The answer: content mapping.

In this post, we’ll cover the importance of content mapping for SEO and explain how to develop a content strategy that helps transform visitors into customers.

Let’s dive in.

What Is Content Mapping?

Content mapping is the process of aligning content with the specific needs of your target audience at various stages of the buyer’s journey.

A chart of the buyer’s journey.

It involves defining your audience, understanding what content they need at different phases of the decision-making process, and delivering that content to guide them from initial awareness to a purchase decision (and beyond).

In other words, content mapping is about organizing your site so that the right content reaches the right user at the right time.

Why Is Content Mapping Important For SEO?

Creating a content map can bolster your SEO efforts in multiple ways. Here are the key reasons content mapping matters for SEO.

Better Content Relevance And Targeting

Google aims to surface the most relevant results it can find for users’ search queries.

The more relevant the content, the more likely it is to rank higher in search results.

Content mapping helps increase your content’s relevance by aligning it with the search intentions of your target audience at different stages of the buying journey.

Increased User Engagement

Relevant content that directly addresses the goals and needs of visitors is also more engaging.

Better engagement means longer session durations, lower bounce rates, and more frequent interactions – all of which signal to Google that your site is a quality source of content worth ranking prominently in its search results.

Improved Keyword Optimization

Content mapping involves identifying the keywords your prospects use along the buyer’s journey.

Once you know which keywords people use at various stages of the customer lifecycle, you can incorporate those terms into the appropriate pages on your website.

Targeting a wide range of keywords lets you cater to a broader range of user intentions and boosts your organic visibility.

Structured Content Delivery

A well-planned content map helps create a clear and organized content architecture for your site.

A logical site structure improves user experience and makes it easier for search engines to crawl and index your pages correctly.

Moreover, strategic linking between thematically related content can enhance your site’s authority for particular topics.

Support For Content Updates

A content map provides a complete overview of your site’s content, laying out the relationship between different pieces of content and their respective stages in the buyer’s journey.

With a map in hand, it’s easier to see which parts of your site are outdated or underperforming and to spot new opportunities to develop more content.

Remember, search engines favor fresh, up-to-date content, which can help maintain or improve your rankings over time.

How To Create An Effective Content Map In 6 Steps

Now, let’s break down the content mapping process.

1. Define Your Buyer Personas

The first step is to develop a clear understanding of your various customer segments.

The better you understand your ideal customers’ needs, goals, motivations, and objections, the easier it will be to create content that serves them.

The goal here is to create buyer personas (or customer avatars) that represent your target audience.

Start by gathering as much information as possible on your existing customers through surveys, interviews, customer service data, and website analytics.

Look for demographic data such as age, location, income level, and psychographic data, including personal interests, buying preferences, and lifestyle choices. Learn the challenges that lead them to seek products and services like yours.

Once you’ve collected this data, it’s time to build out your buyer personas. Each persona should be detailed and specific, like this:

Buyer Persona: “Digital Marketer Dan”

  • Demographics: Age 32, lives in San Francisco, single, master’s degree in marketing.
  • Professional role: Senior digital marketing manager at a tech startup.
  • Goal: To increase brand awareness and lead generation through innovative digital strategies.
  • Challenges: Struggles with limited budget and resources; needs tools that maximize ROI.
  • Behavioral traits: Highly active on LinkedIn, relies on industry blogs and webinars for ongoing education.
  • Decision factors: Prioritizes cost-effective software and tools with strong user support.
  • Preferred content types: Enjoys in-depth guides, case studies, and free webinars.

Here is a sample buyer persona we developed for my agency, HigherVisibility:

A sample buyer persona the author developed for his agency, HigherVisibility. Screenshot from author, May 2024.

2. Map Out The Buyer’s Journey

The next step is to map out the customer journey for each of your personas. This will guide you in creating content that meets the needs of different personas at every stage of the decision-making process.

Typically, the customer journey is made up of three main phases:

  • Awareness: Here, the persona first becomes aware of a problem or need. For example, “Digital Marketer Dan” realizes he needs to improve his digital marketing ROI. He heads to Google for ideas and clicks through to one of your blog’s educational posts discussing various digital marketing strategies.
  • Consideration: At this point, your persona will be evaluating different solutions to their more fundamental problem. For example, Dan has realized that his team needs a new digital marketing platform to optimize campaigns more effectively. Now, he’s researching the features and benefits of different tools to see which solution can best meet his needs. To get a better sense of what your company offers, Dan reads through a few of your comparison guides and watches your product demonstration videos.
  • Decision: In this final phase, your persona has considered their options and is primed to make a buying decision. For example, Dan is now almost ready to buy. To alleviate any last-minute concerns, he reads through positive feedback from other customers on your testimonials page and then finally reaches for his bank card.

Read more: How To Create A Customer Journey Map

3. Map Existing Content To Different Stages Of The Buyer’s Journey

Now, it’s time to categorize your existing content according to where it fits into the customer journey.

A common approach here is to separate content into three buckets:

  • Top of the funnel (TOFU): This content caters to visitors in the awareness stage. The goal here isn’t to make a sales pitch. Instead, it’s about providing useful information, establishing your brand as a trustworthy source of insights, and raising awareness that your solution exists. TOFU content includes blog posts, infographics, and educational videos about topics within the remit of your brand.
  • Middle of the funnel (MOFU): This content caters to those in the consideration stage. It delves deeper into the benefits and features of your solution, showing visitors how it can help address their problems. Examples include webinars, case studies, and product overviews.
  • Bottom of the funnel (BOFU): This content is designed to help purchase-ready visitors commit to a final decision. Since users are now familiar with what you can offer them, the goal is to address any lingering objections or questions standing in the way of a sale. BOFU content includes special offers, free trials, shipping and refund policy pages, customer reviews, and testimonials.

4. Optimize Existing Pages For The Right Keywords

After identifying where each page fits into the customer journey, you’ll need to ensure those pages are optimized for the appropriate keywords.

But before you get started, we recommend you first refresh and expand your current keyword list to make it as up-to-date and comprehensive as possible.

The aim is to build a complete picture of the type of queries people use when searching for TOFU, MOFU, and BOFU content related to your business. Be sure to include search intent and search volume data in your list.

Once your new keyword list is ready, make sure each page is optimized for a relevant primary keyword if it isn’t already.

Here are some best practices to keep in mind when optimizing your content for keywords:

  • Match pages with keyword intent. Only assign a keyword to a page that genuinely addresses the search intent underlying that term. If you don’t yet have a page for a given keyword, you can add it to your content calendar (see Step 6).
  • Update meta tags. Adjust the title tags, meta descriptions, and header tags to include your target keyword and any close variants. This not only helps Google understand the page’s relevance but can also encourage more click-throughs from the search results.
  • Add keywords to the content. Integrate your target keyword into your content, especially the introduction, conclusion, and headers. All keyword placements should appear natural to the reader. Also, avoid cramming your text with keywords – keyword stuffing is considered spam by search engines and can damage your SEO efforts.
  • Avoid keyword cannibalization. When two or more pages from the same website target the same term, search engines have difficulty determining which page to rank. This can dilute the ranking potential of all the competing pages. So, try to make sure that no two pages target the same primary keyword.
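The cannibalization check in the last bullet can be automated. Here is a minimal Python sketch of the idea, grouping pages by their assigned primary keyword and flagging any keyword claimed by more than one page (all page URLs and keywords below are hypothetical examples):

```python
from collections import defaultdict

# Hypothetical mapping of pages to their assigned primary keywords.
page_keywords = {
    "/blog/improve-home-security": "home security tips",
    "/guides/best-smart-locks": "best smart locks",
    "/blog/smart-lock-roundup": "best smart locks",  # duplicate target
    "/testimonials": "smart home installer reviews",
}

def find_cannibalization(page_keywords):
    """Group pages by primary keyword; flag keywords targeted by two or more pages."""
    by_keyword = defaultdict(list)
    for page, keyword in page_keywords.items():
        by_keyword[keyword.lower().strip()].append(page)
    return {kw: pages for kw, pages in by_keyword.items() if len(pages) > 1}

conflicts = find_cannibalization(page_keywords)
for keyword, pages in conflicts.items():
    print(f"'{keyword}' is targeted by {len(pages)} pages: {pages}")
```

In practice, the `page_keywords` mapping would come from your content map spreadsheet or a crawl export rather than a hard-coded dictionary.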

Read more: Keyword Research: An In-Depth Beginner’s Guide

5. Update Internal Links Strategically

Internal links are the routes your users take from one page to another. Not only are they essential for site navigation, but they also help distribute link equity (ranking power) from high-authority pages to lower-authority ones.

However, since your website’s ultimate goal is to generate conversions, each page should contain links that direct users to the next step along the buyer’s journey.

For example, suppose your business sells and installs smart home technology.

Your blog post about “Improving Home Security” (TOFU) might contain an in-text link to a related comparison guide about “The Best Smart Home Security Systems of 2024” (MOFU). Likewise, the comparison guide might contain a link directing readers to your testimonials page (BOFU).
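The linking pattern in this example (each page points one stage further down the funnel) can be expressed as a simple stage-ordered map. The following Python sketch suggests next-step link targets for a page; the URLs and content map are hypothetical:

```python
# Funnel stages in order; each page should link forward to the next stage.
STAGES = ["TOFU", "MOFU", "BOFU"]

# Hypothetical content map: page URL -> funnel stage.
pages = {
    "/blog/improving-home-security": "TOFU",
    "/guides/best-smart-home-security-systems-2024": "MOFU",
    "/testimonials": "BOFU",
}

def next_step_targets(page_stage, pages):
    """Suggest internal link targets one stage further down the funnel."""
    idx = STAGES.index(page_stage)
    if idx + 1 >= len(STAGES):
        return []  # BOFU pages have no further stage to link to
    next_stage = STAGES[idx + 1]
    return [url for url, stage in pages.items() if stage == next_stage]

print(next_step_targets("TOFU", pages))
# ['/guides/best-smart-home-security-systems-2024']
```

A BOFU page returns an empty list, which matches the idea that decision-stage content should point at conversion actions rather than further reading.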

6. Identify Content Gaps To Inform Your Editorial Calendar

The next step is to identify opportunities to create new content or add to existing content.

After expanding your keyword list in Step 4, you will most likely find several relevant keywords for which you don’t currently have content. These keywords can form the basis of your new content calendar.

When deciding which keywords to prioritize in your production schedule, keep the following in mind:

  • Marketing goals: What is your primary marketing goal? If it’s to generate more traffic and brand awareness, targeting TOFU keywords should be your priority. If the aim is to maximize sales conversion rates, targeting BOFU keywords might be a better option.
  • Relevance to your audience: Prioritize keywords that reflect your core audience’s needs. Ask whether there are any topics or questions your buyer personas are likely to have that your content doesn’t currently address.
  • Search volume and competition: High-volume keywords are typically harder to rank for. Consider creating content clusters to increase your site’s authority for specific topics. Content clusters involve a central “pillar” page providing an overview of the main topic, which links to multiple “cluster” pieces covering related sub-topics in more depth.
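The gap analysis described in this step boils down to comparing your expanded keyword list against the keywords already covered by existing pages, then grouping the leftovers by funnel stage for prioritization. A minimal sketch (all keywords are hypothetical):

```python
# Hypothetical expanded keyword list from Step 4, tagged by funnel stage.
keyword_list = {
    "home security tips": "TOFU",
    "best smart locks": "MOFU",
    "smart lock free trial": "BOFU",
    "smart lock installation cost": "MOFU",
}

# Keywords already covered by existing pages (from the Step 3 content map).
covered = {"home security tips", "best smart locks"}

# Content gaps: keywords with no page yet, grouped by stage for prioritization.
gaps = {}
for keyword, stage in keyword_list.items():
    if keyword not in covered:
        gaps.setdefault(stage, []).append(keyword)

print(gaps)
# {'BOFU': ['smart lock free trial'], 'MOFU': ['smart lock installation cost']}
```

From here, the marketing-goal criterion above is just a matter of which stage's bucket you work through first.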

Final Thoughts

Creating a content map is a powerful strategy for any business looking to increase its organic visibility and maximize its website’s ability to attract qualified visitors and convert them into customers.

By understanding and addressing the various needs of your audience throughout the buyer’s journey, you can position your brand as a valuable source of information and guide visitors from initial awareness to making a purchasing decision.

Building a content map requires careful planning and attention to detail. You should treat it as a living document that evolves with your audience’s needs and business goals.

Regularly reviewing and updating your content map will ensure it remains relevant and effective in guiding your content strategy.

More resources:


Featured Image: watchara panyajun/Shutterstock

Your Guide To Dominating Local Search Marketing via @sejournal, @meetsoci

This post was sponsored by SOCi. The opinions expressed in this article are the sponsor’s own.

As a marketer, you may feel like the ground is shifting under your feet with so many changes in the world of search. From Google’s recent announcement to release AI Overviews to all U.S. users to OpenAI revealing GPT-4o, there’s a lot to keep up with.

How will these changes impact your search efforts? Do you need to shift your search strategy?

We have the answers for you and more!

In this blog, we’ll explain how search marketing has changed and what this means for your brand, and share tactics to improve your online visibility. At the end, we’ll also introduce our new game-changer for local search management.

Let’s get into it!

The Evolution Of Search Marketing

As search evolves, many marketers are worried about their brand remaining visible online. Because AI-generated search experiences are so new, we do know that now isn’t the time to make any drastic changes to your search marketing strategies.

You can test how your brand appears in generative AI (genAI) results (what we’ve dubbed GAIRs), but there’s no reason to sound an alarm — at least not yet.

Today, nearly three-quarters of consumers conduct local searches at least once a week. Similarly, in the U.S., over 800 million monthly searches contain some variation of “near me,” and more than 5 million keywords are related to “near me.”

Focusing on conventional local SEO efforts is the best way for your brand to ensure its visibility in traditional and GAIRs.

Local SEO for businesses with multiple locations involves incorporating a local SEO strategy for each business location. A multi-location SEO strategy, when done correctly, will boost your local search rankings, help you gain local customers, and improve brand awareness.

If your business doesn’t have multiple locations, you can still follow the tactics below to ensure your business is visible to your target audience in your specific area.

5 Ways To Improve Your Online Visibility

Now that you understand how search has evolved and the importance of local SEO, let’s dive into five local SEO tactics your brand can leverage to boost online visibility.

1. Claim & Optimize Local Listings

Local listings are online profiles of local businesses. They appear on search engines, local directories, and platforms like Google, Apple Maps, Yelp, Bing, and Facebook.

To increase your visibility on Google and beyond, your brand must claim local listings across all major local directories and remove duplicate listings.

Additionally, you need consistent and accurate information across all listings. At a minimum, your local listings should include the following information:

  • Name, address, and phone number (NAP) citations.
  • Business categories (example: sushi restaurant).
  • Business hours, especially during holidays and major events.
  • Products and services your business offers.
  • Links to your website and social media profiles.
  • Attributes (example: curbside pickup or wheelchair-accessible seating).
  • High-quality photos and videos.
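The consistency requirement above can be spot-checked programmatically: normalize each NAP field and compare it across directories. Below is a minimal Python sketch; the listing data, the abbreviation handling, and the directory names are illustrative assumptions, and a real audit would pull listings from each platform's API or an export:

```python
import re

def normalize(field, value):
    """Normalize a NAP field for comparison."""
    if field == "phone":
        return re.sub(r"\D", "", value)  # digits only
    # Lowercase, drop punctuation, and expand one common abbreviation.
    words = re.sub(r"[^\w\s]", "", value).lower().split()
    return " ".join("street" if w == "st" else w for w in words)

# Hypothetical listing data pulled from each directory.
listings = {
    "Google":   {"name": "Acme Sushi", "address": "123 Main St.",    "phone": "555-0100"},
    "Yelp":     {"name": "Acme Sushi", "address": "123 Main Street", "phone": "555-0100"},
    "Facebook": {"name": "ACME Sushi", "address": "124 Main St",     "phone": "(555) 0100"},
}

def find_inconsistencies(listings):
    """Return NAP fields whose normalized values differ across directories."""
    issues = {}
    for field in ("name", "address", "phone"):
        values = {src: normalize(field, data[field]) for src, data in listings.items()}
        if len(set(values.values())) > 1:
            issues[field] = values
    return issues

print(find_inconsistencies(listings))  # flags the mismatched address
```

Note that formatting-only differences ("St." vs. "Street", phone punctuation) are normalized away, so only genuine mismatches, like the wrong street number above, get flagged.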

After optimizing your local listings, you can focus on your local pages.

2. Create Local Pages For Each Location

A local page, sometimes called a local landing page, is a web page you create for an individual store location or franchisee. It’s similar to local listings but lives on your site rather than an external directory like Yelp or Google.

Your multi-location business might have dozens or hundreds of local pages, each containing specific information about that store and the surrounding area.

Local pages should contain most of the business information found on your local listings. However, they’re also high-conversion pages. Therefore, they should also contain calls to action (CTAs) such as “order now” buttons or promotional sales and discounts.

Well-designed and optimized local pages can help your business appear high in local organic search results. As mentioned, these higher rankings often lead to more conversions and business for your stores!

3. Leverage A Store Locator

Store locators are similar to local pages. A store locator is a web page that lists all of your local stores or third-party dealers that sell your products.

Store locators help move website visitors through the customer journey by displaying valuable location information and unique details about each store. They make it easier for customers to purchase online and to contact or visit local stores.

Well-optimized and compatible store locators and local pages will help improve:

  • Local search rankings.
  • Website traffic and online conversions.
  • Analytics, such as where visitors are searching and coming from.

4. Implement An Online Reputation Management Strategy

While reputation management might not be something you’d consider when you think of improving your online visibility, you’d be surprised. According to local SEO experts, high numerical Google ratings are the sixth highest ranking factor in Google’s local pack and finder. At the same time, the quantity of native Google reviews (with text) is the eighth ranking factor.

A high quantity and quality of reviews don’t just affect local search rankings — they also impact conversion rates. According to our State of Google Reviews research report, an increase of one full star on a Google Business Profile (GBP) corresponds with a 44% increase in conversions.

To improve your reputation management strategy and gain more reviews:

  1. Respond to existing reviews in a personalized manner to show customers you value their feedback.
  2. Utilize social media to encourage customer feedback, ratings, and reviews.
  3. Make leaving a review accessible! Include links to your GBP on your website and in emails.
  4. Monitor the feedback that your business receives from reviews and make adjustments accordingly.

5. Create Unique Content

Generating localized content for your local pages, website, and listings is also essential. You want to ensure your localized content is optimized to target specific areas.

For instance, if you’re targeting the keyword “sporting goods store Seattle,” you want to update your URL, title tag, meta description, and headings with locally relevant keywords.

You should also leverage local images, including photos of your stores and products. Remember to include geo-targeted meta descriptions, alternative text, and descriptions within your images.

Types of local content your brand can create include but are not limited to:

  • Blogs.
  • Surveys.
  • Infographics.
  • Whitepapers.
  • Social media content.
  • Neighborhood guides.
  • User-generated content (UGC).

For a more in-depth look at what it takes to improve your brand’s local SEO strategy, download our Top 10 Things You Should Be Doing in Local SEO Now guide!

How SOCi Can Help

Now that you understand what goes into creating a solid local search strategy, it’s time to boost your brand’s visibility. As marketers, you get how crucial search marketing is, but let’s be real, coming up with a plan to roll it out on a big scale is easier said than done.

That’s where SOCi comes in! We’ve spent more than a decade building SOCi to help multi-location businesses rank well in local search and on social media platforms, create engaging content, and manage each location’s online reputation.

We’ve enhanced our CoMarketing Cloud with SOCi Genius, an AI automation layer to help automate all of your daily localized marketing tasks. As part of SOCi Genius, we recently released Genius Search, a game-changer in search marketing!

As the newest innovation within the CoMarketing Cloud, Genius Search transcends traditional listings management by offering a dynamic, data-driven local search strategy that aligns with evolving consumer behaviors and market trends.

Genius Search uses top data signals, such as reviews, search keywords and volume, weather, and holidays, to deliver monthly AI-powered recommendations that can be accepted with the click of a button. Once accepted, these optimizations instantly improve your business listings’ rankings by tailoring them to each location’s community.

It’s time to level up your local search strategy, and SOCi is here to help. Request a personalized demo today for more insight on Genius Search and our other Genius products!

Ready to start optimizing your website? Sign up for SOCi and get the data you need to deliver great user experiences.


Image Credits

Featured Image: Image by SOCi. Used with permission.

SEO In Crisis? Moz Search Scientist Warns Of Challenges Ahead via @sejournal, @MattGSouthern

Are the days of organic SEO numbered? That’s the idea raised by a search scientist’s assessment of Google’s AI-powered disruptions.

At MozCon’s 20th annual conference, Tom Capper, Moz’s Senior Search Scientist, provided a data-driven reality check.

Capper warned attendees:

“At the end of this talk, I’m going to tell you that full-funnel organic marketing is borderline impossible in 2024 for most businesses.”

He examined how Google’s AI overview results, aggressive monetization, and evolving search intents pose challenges for companies relying on SEO.

Additionally, in an exclusive interview with Search Engine Journal, Capper highlighted potential paths forward for those willing to pivot.

Photo taken by author at MozCon, June 2024.

The Zero-Click Threat

Capper opened by chronicling the rise of search “intents” like informational, navigational, commercial, and transactional queries.

Google’s new AI Overview feature, which generates direct answers at the top of the page, has proven particularly disruptive for informational searches.

“Organic is a really tough game for informational intent,” said Capper, displaying data showing that informational searches have the lowest share of voice for traditional organic results due to AI Overviews and other SERP features.

Photo taken by author at MozCon, June 2024.

He also noted 21% of informational searches now surface a Featured Snippet result, which can satisfy users without a click.

“You basically can’t play at the top of the funnel,” he stated bluntly.

AI Overviews A “Mistake”

In the exclusive interview, Capper cautioned that Google’s rush to implement AI overviews could negatively impact the company’s brand image:

“I think Google has gone too soon and rushed this, and yeah, I do think it’s a mistake. That is a little bit dangerous for SEO in that if Google suffers, then that’s disruptive for our industry as well.”

The Commercial Battleground

While the data is dire for informational content, Capper says commercial searches represent a “sweet spot.”

However, these valuable mid-funnel queries have become a “turbulent” and “incredibly contested” battleground.

Weighing in on the Google product reviews update and other recent changes, Capper said:

“Commercial is where a lot of this [Google’s search quality issues] plays out…it’s become an incredibly volatile section.”

Major sites like Amazon, Reddit, and YouTube dominate commercial results alongside a glut of price listings and review rich results. This raises the bar for smaller sites trying to rank.

“There are arts, hobbies, real estate – much more realistic to try and compete in here,” Capper advised.

He warned publishers who rely solely on easily answered questions,

“If that’s what you’ve been doing, you’ve probably been suffering for a long time…If you’re not willing to pivot to any other kind of content, then yeah, sure, go. Find a different channel.”

The Paid & Local Future?

At the bottom of the funnel, Capper described transactional searches as “pay-to-play unless you’re a brick-and-mortar business.”

Google’s monetization of product listings and its experimental map embeds for transactional queries continue to squeeze out organic visibility.

However, Capper highlighted local SEO as a promising path forward, stating:

“If you can do well in local search, I think even in a worst-case scenario AI Overview rollout, you would still be doing well here.”

Adapting To The Changing Landscape

Despite the challenges posed by AI-powered search features, Capper believes there are still opportunities for organic marketing success.

He offers the following recommendations:

  • Target informational queries that don’t have a featured snippet, allowing for better organic visibility.
  • Focus on less competitive commercial queries in verticals like arts, hobbies, and real estate.
  • Leverage local search optimization for transactional queries, even for businesses without a brick-and-mortar presence.
  • Use keyword modifiers like “best,” “compare,” “top,” and “reviews” to identify commercial intent queries.

Photo taken by author at MozCon, June 2024.

Looking To The Future

When asked about his advice for SEO professionals who may be disheartened by the AI search revolution, Capper suggests adapting and focusing on creating high-quality, authoritative content.

Capper stated in the exclusive interview:

“If you’ve got any willingness at all to write something more interesting, then I think you can still play in organic.”

Ultimately, Capper remains optimistic about the future of organic search.

In the interview, he points out that Google’s business model depends on sending organic traffic to other sites:

“I don’t think Google will ever reach the point where Google doesn’t send traffic at all because, ultimately, that’s its business model.

People expect when they search Google that they will end up going to other websites; if people don’t have that expectation, they won’t click on ads; if people aren’t clicking on ads, Google doesn’t make any money.”

In Summary

While informational and transactional searches have become challenging to rank for organically, Capper’s research suggests there are opportunities in commercial and local spaces.

To adapt, he recommends focusing on less competitive commercial topics, leveraging local SEO for transactional queries, and creating content beyond simply answering basic questions.


Featured Image: KieferPix/Shutterstock

New Ecommerce Tools: June 4, 2024

This week’s list of new ecommerce tools for merchants includes updates on drone deliveries, social media marketing, point-of-sale systems, conversational AI, fulfillment, brand-influencer partnerships, and multichannel management.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants: June 4

Pinterest expands inclusion fund with Shopify to include small businesses. The Pinterest Inclusion Fund and Shopify’s Build Black and Build Native initiatives have joined forces to support small businesses, independent publishers, and boutique creator agencies from historically marginalized communities.

Pinterest Inclusion Fund.

Amazon expands drone deliveries following FAA approval. The U.S. Federal Aviation Administration has given Amazon Prime Air additional permissions to operate its drones beyond visual line of sight, enabling it to serve more customers via drone and scale its delivery operations. Later this year, drone deliveries will begin integrating into Amazon’s delivery network, meaning drones will deploy from facilities next to its same-day delivery sites.

Walmart expands its InHome delivery service. Walmart is expanding its InHome delivery service to an additional 10 million U.S. households, including those in California. Customers in San Bernardino, Riverside, and Ontario can now access the delivery convenience. In addition, Walmart is expanding the service to include customers in Boston, Detroit, Minneapolis, and Philadelphia, bringing the total scale of InHome to more than 50 markets covering over 45 million homes.

Klaviyo launches TikTok integration to boost ad efficiency. Klaviyo has announced an integration with TikTok, enabling businesses to sync their Klaviyo lists and segments to TikTok Audiences. The capability empowers brands to enhance their return on ad spend, deliver personalized advertisements, and build high-impact lookalike audiences to attract new buyers.

Home page of Klaviyo

Klaviyo

Razorpay POS launches Q-Zap for queueless in-store shopping. Razorpay, an India-based payment platform, has introduced Q-Zap, a comprehensive suite designed to revolutionize in-store payments with Razorpay POS. Q-Zap features handheld devices to accept cards and Unified Payments Interface anywhere in stores, kiosks, and vending machines for self-checkout, plug-and-play integrations with over 80 billing partners, and customized backend integrations and front-end apps.

PayPal to build an advertising platform and accelerate consumer product innovation. PayPal has announced leadership appointments to create an advertising platform to help merchants sell more products and services. According to PayPal, the new advertising business will use customer insights to build a personalized platform to drive better performance for merchants while presenting consumers with compelling offers.

New Rakuten+ to connect brands with high-value shoppers. Rakuten, a Tokyo-based cash-back shopping platform, has announced Rakuten+, a membership program for designer brands to connect with engaged Rakuten members. Rakuten+ will offer brands access to its most loyal and active shoppers of designer fashion. These curated shoppers will earn at least 10% cash back on participating brands and retailers year-round, providing designer brands with an opportunity to grow their audiences.

Home page of Rakuten+

Scurri acquires conversational AI platform HelloDone. Delivery management platform Scurri has announced the acquisition of HelloDone, a U.K.-based conversational AI platform for retailers and brands. HelloDone’s platform answers post-purchase customer queries on channels such as WhatsApp, Instagram, and Facebook Messenger. This enables retailers to automate responses via AI-powered chat for immediate resolutions at scale.

Baidu introduces tools for its mobile ecosystem. China-based online search giant Baidu has announced advances across its mobile products, including Search, Wenku, and the Baidu App — all integrated with large language models and Ernie agents. The revamped Wenku, an integrated platform for document archiving and content creation and sharing, provides over 100 multimodal AI features.

Logie launches AI tools for brand-influencer campaigns. Logie, an AI-powered platform for brand-influencer partnerships, has announced tools to create partnerships for Amazon Prime Day. According to Logie, its intelligent platform leverages AI to identify the most relevant influencers for each brand. This ensures brands align with creators who have the right audience, brand affinity, and content style to deliver impactful campaigns. Logie also empowers creators with access to brand collaborations within their niche.

Shopware and ChannelEngine launch Shopware Multichannel Connect. Shopware, an open-source ecommerce platform, and ChannelEngine, a marketplace integration and management platform, have launched Shopware Multichannel Connect, enabling Shopware merchants to sell across online marketplaces. Shopware Multichannel Connect automates sales on over 950 marketplaces (e.g., Amazon, Zalando, Bol, Otto, and eBay). The integration simplifies the management of product information, inventory, orders, and pricing from a central location.

Shopware home page

The Rise Of Reddit: How You Can Leverage The Platform That’s Revolutionizing Search

It’s no secret that Reddit is making major waves in the digital marketing landscape.

But what does that mean for your strategy?

Join us live on June 12 as we dive into how Reddit is shaping the future of search and how you can leverage it to your advantage. 

With Google investing $60 million to access Reddit’s real-time content and OpenAI integrating it into ChatGPT, Reddit’s visibility on search engines and AI platforms has skyrocketed. 

So if your goal is to get your brand and content in front of evolving search audiences, knowing how to navigate Reddit is now essential. 

In this insightful webinar, our Managing Partner and Co-Owner, Brent Csutoras, will lead an engaging discussion about how to strategically position your brand on Reddit to capitalize on its growing influence.

As a Reddit expert with over 18 years of experience on the platform, Brent will guide you through how you can effectively navigate Subreddits and engage with communities without violating platform rules.

Here are some key talking points we’ll cover during the presentation: 

  • Reddit as an Influencer Over the Years: Explore Reddit’s major impact on SERPs, as well as its evolution.
  • Recent Changes and Partnerships: Discuss Google’s and OpenAI’s recent investments and how they enhance Reddit’s significance.
  • Understanding Reddit: Get a comprehensive look at the platform, how it works, how to approach it, and how to become an active member.
  • Engagement Opportunities: Identify and outline various opportunities on Reddit.

Reddit has always been a goldmine for real-time conversations and target audiences, but recent developments have catapulted its significance to new heights. 

Don’t miss this chance to discover Reddit best practices and unlock the power of authentic engagement in the AI-powered search era.

Sign up now and learn how taking advantage of this platform can help you captivate your target audience and boost brand visibility.

Be sure to stay for the live Q&A session, where Brent will answer your most pressing Reddit questions after the presentation.

Can’t make it on the 12th? We’ve got you covered! Simply register here and we’ll send you a recording of the webinar to view at your convenience.