Google is upgrading its experimental AI service Bard by rebranding it as Gemini and introducing a new AI model called Ultra 1.0.
Google is also launching a mobile app for Gemini.
In an announcement, Sissie Hsiao, Vice President and General Manager of Gemini, shares how people have interacted with the AI since Bard’s launch.
“People all over the world have used it to collaborate with AI in a completely new way,” Hsiao stated, highlighting the diverse applications from job interview preparation to creative image generation.
Introducing Gemini Advanced
Google has released a new iteration of Gemini called Gemini Advanced.
It utilizes Google’s latest AI model, Ultra 1.0, which the company describes as its most capable AI system.
Gemini Advanced is designed to excel at complex tasks like coding, logical reasoning, and creative work. It can maintain long conversations and understand context from previous interactions.
Google states that Gemini Advanced can be a personal tutor, provide coding advice, and help content creators generate new ideas.
As Google continues developing Gemini Advanced, users can expect ongoing improvements, including new features, multimodal capabilities, interactive coding, and data analysis tools.
The service is now available in over 150 countries in English, with plans to add more languages.
Google One AI Premium Plan
Google has announced the launch of Gemini Advanced alongside a new premium subscription plan called Google One AI Premium.
This new plan is priced at $19.99 per month and includes all the existing Google One Premium subscription features, like 2TB of cloud storage and access to Google’s latest AI advancements.
With the new plan, subscribers will soon be able to use Gemini technology within Google’s productivity tools, including Gmail, Google Docs, Slides, and Sheets.
Mobile Access To Gemini
In response to user demand for mobile accessibility, Google is rolling out a new app for Android and an integration within the Google app on iOS.
Gemini will be integrated with Google Assistant on Android devices for a seamless experience and voice control over connected home devices. The iOS Google app will soon offer comparable capabilities.
Rollout & Future Expansion
The Gemini app is now available on Android, and the integration within the Google app on iOS will follow in the coming weeks. The app will initially support English, with Japanese and Korean languages to be added soon. Additional country rollouts and language support are planned.
Google notes that users are encouraged to try Gemini and provide feedback to help improve the experience. The company states it remains committed to responsible AI development, including extensive safety testing and efforts to address biases and unsafe content, as per its published AI Principles.
Google has announced the broad integration of Gemini across many of its products and services, marking a milestone in making AI available in everyday applications.
Gemini represents Google’s “state of the art” system, which the company says outperforms human experts in testing across areas like language, image, audio, and video understanding.
The largest Gemini model, Ultra 1.0, scored higher than human experts on tests that evaluate knowledge and problem-solving abilities across 57 diverse subjects.
“Today we’re taking our next step and bringing Ultra to our products and the world,” said Sundar Pichai, CEO of Google and Alphabet, in a blog post.
Introducing ‘Gemini Advanced’
One way users can access Gemini is through Google’s experimental chatbot ‘Bard,’ which will now be called simply Gemini. A more advanced version called ‘Gemini Advanced’ will also launch, available initially through Google’s new ‘Google One AI Premium’ subscription.
Gemini Advanced taps into the full capabilities of Ultra to provide reasoning, follow instructions, generate content and enable creative collaboration. Google said it can act as a personalized tutor or aid users in planning business strategies.
The premium subscription will cost $19.99 per month and bundle top AI features from Google into a single offering. This includes extras like expanded cloud storage.
Gemini Coming To Key Google Products
In addition to the chatbot, Gemini models will power AI capabilities in Google’s most popular products.
The company said Gemini is coming soon to Workspace, its suite of collaboration and productivity apps. Key features like ‘Smart Compose’ in Gmail use Gemini to help users write faster. Later this year, Google One subscribers will also get access to Gemini directly within Workspace apps.
For Google Cloud customers, Gemini will help developers code faster, improve productivity, and strengthen security through AI.
Google plans to share more details next week about what Gemini will enable for developers and Cloud customers.
In a study of 26 prompting tactics, researchers have identified methods, such as offering a tip, that significantly enhance responses and align them more closely with user intentions.
A research paper titled “Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4” details an in-depth exploration into optimizing Large Language Model prompts. The researchers, from the Mohamed bin Zayed University of AI, tested 26 prompting strategies and then measured the accuracy of the results. All of the tested strategies produced at least some improvement, and some improved the output by more than 40%.
OpenAI recommends multiple tactics for getting the best performance out of ChatGPT, but nothing in the official documentation matches the 26 tactics the researchers tested, such as being polite or offering a tip.
Does Being Polite To ChatGPT Get Better Responses?
Are your prompts polite? Do you say please and thank you? Anecdotal evidence suggests a surprising number of people include a “please” in their prompts and type a “thank you” after they receive an answer.
Some people do it out of habit. Others believe the language model is influenced by the user’s interaction style and that this is reflected in the output.
In early December 2023, someone on X (formerly Twitter) who posts as thebes (@voooooogel) ran an informal and unscientific test and found that ChatGPT provides longer responses when the prompt includes an offer of a tip.
The test was in no way scientific, but it was an amusing thread that inspired a lively discussion.
The tweet included a graph documenting the results:
Saying that no tip would be offered resulted in a 2% shorter response than the baseline.
Offering a $20 tip provided a 6% improvement in output length.
Offering a $200 tip provided 11% longer output.
so a couple days ago i made a shitpost about tipping chatgpt, and someone replied “huh would this actually help performance”
The researchers had a legitimate reason to investigate whether politeness or offering a tip made a difference. One of the tested principles was to skip politeness and remain neutral, avoiding words like “please” or “thank you,” and that method of prompting yielded a 5% improvement in ChatGPT responses.
Methodology
The researchers tested a variety of language models, not just GPT-4. Each question was tested both with and without the principled prompts.
Large Language Models Used For Testing
Multiple large language models were tested to see if differences in size and training data affected the test results.
The language models used in the tests came in three size ranges:
small-scale (7B models)
medium-scale (13B)
large-scale (70B, GPT-3.5/4)
The following LLMs were used as base models for testing:
LLaMA-1-{7, 13}
LLaMA-2-{7, 13}
Off-the-shelf LLaMA-2-70B-chat
GPT-3.5 (ChatGPT)
GPT-4
26 Types Of Prompts: Principled Prompts
The researchers created 26 kinds of prompts, which they called “principled prompts,” and tested them with a benchmark called Atlas. They used a single response for each question, comparing responses to 20 human-selected questions with and without the principled prompts.
The principled prompts were arranged into five categories:
Prompt Structure and Clarity
Specificity and Information
User Interaction and Engagement
Content and Language Style
Complex Tasks and Coding Prompts
These are examples of the principles categorized as Content and Language Style:
“Principle 1 No need to be polite with LLM so there is no need to add phrases like “please”, “if you don’t mind”, “thank you”, “I would like to”, etc., and get straight to the point.
Principle 6 Add “I’m going to tip $xxx for a better solution!”
Principle 9 Incorporate the following phrases: “Your task is” and “You MUST.”
Principle 10 Incorporate the following phrases: “You will be penalized.”
Principle 11 Use the phrase “Answer a question given in natural language form” in your prompts.
Principle 16 Assign a role to the language model.
Principle 18 Repeat a specific word or phrase multiple times within a prompt.”
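To make the tactic concrete, here is a minimal TypeScript sketch of how a base question could be wrapped with two of the principles quoted above (Principle 1, dropping politeness, and Principle 6, offering a tip). The function name, labels, and tip amount are illustrative assumptions, not something taken from the paper.

```typescript
// Sketch: building "principled" variants of a base prompt.
// The helper name, labels, and tip amount are hypothetical; the phrasing of
// the principles follows the examples quoted above.

type PromptVariant = { label: string; prompt: string };

function buildPrincipledPrompts(question: string): PromptVariant[] {
  return [
    // Baseline: the question as a user might politely phrase it.
    { label: "baseline", prompt: `Please ${question}. Thank you!` },

    // Principle 1: skip politeness and get straight to the point.
    { label: "principle-1-direct", prompt: question },

    // Principle 6: offer a tip for a better solution.
    {
      label: "principle-6-tip",
      prompt: `${question} I'm going to tip $200 for a better solution!`,
    },
  ];
}

// Example usage: generate the variants for a sample question.
const variants = buildPrincipledPrompts(
  "explain the difference between TCP and UDP in two sentences"
);
for (const v of variants) {
  console.log(`${v.label}: ${v.prompt}`);
}
```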
All Prompts Used Best Practices
Lastly, the design of the prompts used the following six best practices:
Conciseness and Clarity: Generally, overly verbose or ambiguous prompts can confuse the model or lead to irrelevant responses. Thus, the prompt should be concise…
Contextual Relevance: The prompt must provide relevant context that helps the model understand the background and domain of the task.
Task Alignment: The prompt should be closely aligned with the task at hand.
Example Demonstrations: For more complex tasks, including examples within the prompt can demonstrate the desired format or type of response.
Avoiding Bias: Prompts should be designed to minimize the activation of biases inherent in the model due to its training data. Use neutral language…
Incremental Prompting: For tasks that require a sequence of steps, prompts can be structured to guide the model through the process incrementally.
Results Of Tests
Here’s an example of a test using Principle 7, which employs a tactic called few-shot prompting, meaning a prompt that includes examples.
A regular prompt, without the use of one of the principles, got the answer wrong with GPT-4, while the same question asked with a principled prompt (few-shot prompting with examples) elicited a better response.
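For a rough sense of what a few-shot prompt looks like in practice, below is a hedged TypeScript sketch using the official openai Node SDK (v4+). The demonstration questions, answers, and model name are assumptions chosen for illustration; they are not the prompts used in the paper.

```typescript
// Sketch: few-shot prompting (Principle 7) with the openai Node SDK (v4+).
// The demonstration pairs below are made up for illustration.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function askWithFewShot(question: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: "gpt-4", // assumed model name; substitute whichever model you use
    messages: [
      // Few-shot examples demonstrate the desired format before the real question.
      { role: "user", content: "Is 17 a prime number? Answer yes or no, then explain briefly." },
      { role: "assistant", content: "Yes. 17 has no divisors other than 1 and itself." },
      { role: "user", content: "Is 21 a prime number? Answer yes or no, then explain briefly." },
      { role: "assistant", content: "No. 21 is divisible by 3 and 7." },
      // The actual question, asked in the same format as the examples.
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content;
}

askWithFewShot("Is 91 a prime number? Answer yes or no, then explain briefly.")
  .then(console.log);
```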
Larger Language Models Displayed More Improvements
An interesting result of the test is that the larger the language model, the greater the improvement in correctness.
The following screenshot shows the degree of improvement of each language model for each principle.
Highlighted in the screenshot is Principle 1, which emphasizes being direct and neutral and not saying words like “please” or “thank you,” and which resulted in an improvement of 5%.
Also highlighted are the results for Principle 6, the prompt that includes the offer of a tip, which surprisingly resulted in an improvement of 45%.
The description of the neutral Principle 1 prompt:
“If you prefer more concise answers, no need to be polite with LLM so there is no need to add phrases like “please”, “if you don’t mind”, “thank you”, “I would like to”, etc., and get straight to the point.”
The description of the Principle 6 prompt:
“Add “I’m going to tip $xxx for a better solution!””
Conclusions And Future Directions
The researchers concluded that the 26 principles were largely successful in helping the LLM to focus on the important parts of the input context, which in turn improved the quality of the responses. They referred to the effect as reformulating contexts:
“Our empirical results demonstrate that this strategy can effectively reformulate contexts that might otherwise compromise the quality of the output, thereby enhancing the relevance, brevity, and objectivity of the responses.”
A future area of research noted in the study is to see whether the foundation models themselves could be improved by fine-tuning them with the principled prompts to improve the generated responses.
Google is looking into concerning reports that some businesses are sabotaging competitors’ Google Business Pages (GBPs) by creating fake Local Service Ads (LSAs) linked to their profiles.
Google Ads Liaison Ginny Marvin acknowledged the tactic on X (formerly Twitter) after Ben Fisher alerted her to a thread on Google’s support forum detailing the destructive scheme.
“This is a brutal new tactic that competitors are doing on LSA,” Fisher explained in his message to Marvin. “A competitor makes a new LSA for a competitor, and because the link to GBP is automatic, the system will essentially nuke the competitor out of existence.”
Unfair Play In Online Advertising
A Google support forum thread outlines how one business’s long-standing LSA account suddenly stopped generating leads and referrals after a decade of operation.
When the owner contacted Google’s support team, he was informed that a second unknown LSA account had been created and linked to the company’s Google Business Page, effectively hiding the legitimate ads from public view.
However, the owner maintained that he had only ever created one LSA account and had no knowledge of the mysterious second account.
Google advised completely deleting his original account and reviews and starting over with a brand new Business Page.
This suggestion was, understandably, met with dismay by the business owner, who questioned the fairness of the situation.
“Why would an unknown entity get to force us off of our own Google listing?” the frustrated owner wrote. “There must be a way where we can re-verify our account and eliminate any LSA’s we don’t approve of.”
Google’s Response To The Emerging Threat
Acknowledging the gravity of the situation, Marvin responded to Fisher’s alert, stating: “Thanks for flagging, I’m sharing with the team.”
This acknowledgment indicates that Google is now aware of the exploit and suggests that steps may be taken to address this loophole in the LSA system.
Implications For The Search Marketing Community
This tactic of faking competitor ads to sink Google Business Pages is an alarming new potential threat.
Google’s practice of automatically linking LSAs to Google Business Pages provides an opening for abuse, and time will tell whether the company can close it.
As the search marketing community waits for Google to address the issue, this incident is an eye-opening reminder of the extremes some will go to in order to hinder competition.
For now, businesses are advised to carefully monitor their Google Business Profiles and swiftly report any abnormalities.
YouTube has introduced a new feature that allows podcast creators to upload their podcast RSS feeds directly to YouTube Studio.
The direct RSS feed integration enables audio-focused podcasters to share their content on YouTube more easily without manually uploading individual episodes.
Simplifying Podcast Distribution
RSS, which stands for Really Simple Syndication, is a technology commonly used by podcasters to distribute audio content across different platforms.
YouTube’s new feature allows podcast episodes uploaded via RSS feeds to be automatically converted into static image videos on the platform.
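For context, a podcast RSS feed is simply an XML document that lists episodes along with links to their audio files. The TypeScript sketch below assembles a minimal, generic RSS 2.0 feed; the URLs, titles, and field choices are placeholder assumptions, and YouTube’s exact ingestion requirements may differ.

```typescript
// Sketch: a minimal, generic podcast RSS 2.0 feed built as a string.
// All URLs and titles are placeholders; real feeds usually also include
// artwork, categories, and iTunes-namespace tags.

interface Episode {
  title: string;
  audioUrl: string;   // direct link to the audio file
  pubDate: string;    // RFC 822 date string
  description: string;
}

function buildFeed(showTitle: string, siteUrl: string, episodes: Episode[]): string {
  const items = episodes
    .map(
      (e) => `
    <item>
      <title>${e.title}</title>
      <description>${e.description}</description>
      <pubDate>${e.pubDate}</pubDate>
      <enclosure url="${e.audioUrl}" type="audio/mpeg" />
      <guid>${e.audioUrl}</guid>
    </item>`
    )
    .join("");

  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>${showTitle}</title>
    <link>${siteUrl}</link>
    <description>Episodes of ${showTitle}</description>${items}
  </channel>
</rss>`;
}

// Example usage with placeholder data.
console.log(
  buildFeed("Example Show", "https://example.com", [
    {
      title: "Episode 1",
      audioUrl: "https://example.com/audio/ep1.mp3",
      pubDate: "Mon, 05 Feb 2024 08:00:00 GMT",
      description: "A first episode.",
    },
  ])
);
```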
Automatic Video Creation
When a new episode is added to a podcaster’s RSS feed, YouTube will automatically generate a static image video for that episode and upload it directly to the user’s channel.
This automated process eliminates the need for podcasters to create and upload videos for each episode manually.
How to Use the New Feature
For digital marketers and content creators looking to utilize this new feature, the process is straightforward:
Navigate to YouTube Studio and click the ‘Create’ button in the upper right corner.
Select ‘Submit RSS feed’ and follow the on-screen instructions.
For those who already have podcasts on YouTube, go to the ‘Content’ tab, find the podcast you wish to edit, click the pencil icon under ‘RSS settings,’ and then click ‘Connect to RSS feed.’
Benefits For Podcasters
This integration makes YouTube a more centralized home for podcast creators’ content. By leveraging their existing RSS feeds, they can quickly get their show onto YouTube without manually uploading and managing every episode.
The automated process also saves podcasters time and effort, helping them reach YouTube’s large audience. Expanding distribution to YouTube can help podcasts gain more listeners, views, and subscribers.
For digital media creators, leveraging multiple platforms remains vital for growing an audience. This new tool makes it simpler for podcast producers to tap into YouTube’s massive popularity.
While the RSS integration is currently in beta testing, YouTube aims to refine the feature based on user feedback. The company hopes it will provide a valuable new podcast hosting and distribution option.
Interaction to Next Paint (INP) is a new Core Web Vital metric focused on responsiveness that is scheduled to replace First Input Delay in March 2024. Optimizing for INP is easier with the right tools to monitor and track it.
What Is Interaction to Next Paint (INP)?
INP measures how long a site visitor waits, after an interaction such as clicking a button or typing, for the website to provide visual feedback. In other words, INP shows how long visual feedback is blocked after a user interaction.
The idea behind this metric is that an unresponsive webpage is a poor user experience. For example, adding a product into a shopping cart should immediately produce a visual feedback response showing the site visitor that the interaction was responded to. In that specific example, INP is not measuring the time it takes to add a product to the shopping cart, it only measures how long the visual feedback of that action is blocked.
Lower INP scores mean fast response times, which is the goal. Good INP scores are those under 200 milliseconds.
JavaScript and CSS are the primary targets to look at for INP optimization.
INP measures the following user interactions:
Mouse clicks
Taps on devices that have a touchscreen.
Pressing on a keyboard (both physical and virtual keyboards)
INP Measurement And Optimization Tools
There is no tool that can singlehandedly fix INP problems because the problems originate in the JavaScript and CSS used by themes, plugins, features and extra functionalities used on a webpage.
For example, installing and using an image carousel or animation effects will load extra JavaScript and CSS code which can negatively impact INP scores. Minifying JavaScript and CSS isn’t always the solution, which means a key step for optimizing for Interaction To Next Paint is to audit the code and identify anything that doesn’t help the webpage and the user achieve their purpose.
Thus, the key functionality of an INP optimization tool is to identify what’s blocking or delaying the visual feedback from a user interaction.
1. Site Kit By Google
Site Kit by Google, with over 4 million WordPress installations, is one of the most powerful ways to integrate Google search data into a WordPress dashboard for easy access while inside WordPress.
This tool displays PageSpeed Insights and Search Console data, including actionable advice on what to improve.
2. DebugBear Interaction To Next Paint Tool (Free And Paid Versions)
DebugBear is a popular page speed monitoring tool with a pro version that offers scheduled tests, event notifications, performance tests that preview impacts before live deployment, and other benefits.
But it also offers free tools like its excellent INP Debugger, which will crawl a webpage, diagnose issues, and provide actionable tips for fixing Interaction To Next Paint problems.
3. Web Vitals Chrome Extension
This Chrome extension offers Core Web Vitals metrics, including INP. A useful feature of this extension is the unique heads-up display (HUD) that overlays the webpage, which can be helpful when developing or making changes to a webpage.
4. Treo Site Speed
The Treo site speed tool offers incredibly fast page speed testing with an attractive user interface that’s easy to read and understand.
5. Chrome Web Vitals Library
There is an advanced tool for measuring Core Web Vitals metrics from actual site visitors that can be deployed by individual publishers on their own web servers. This tool enables publishers to see real Core Web Vitals scores, which are useful for troubleshooting actual webpage issues. An overview and explainer is available here.
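As a rough sketch of how a publisher might collect field INP data with that library, the TypeScript below assumes the open-source web-vitals npm package (version 3 or later), which exposes an onINP helper; the /vitals endpoint is a hypothetical collection URL.

```typescript
// Sketch: reporting real-user INP with the web-vitals library (v3+).
// The /vitals endpoint is a placeholder for your own analytics collector.
import { onINP } from "web-vitals";

onINP((metric) => {
  // metric.value is the interaction latency in milliseconds;
  // under 200 ms is rated "good", over 500 ms is rated "poor".
  console.log(`INP: ${Math.round(metric.value)} ms (${metric.rating})`);

  // Send the measurement to the hypothetical collection endpoint so slow
  // interactions from real visitors can be reviewed later.
  navigator.sendBeacon(
    "/vitals",
    JSON.stringify({
      name: metric.name,
      value: metric.value,
      rating: metric.rating,
      id: metric.id,
    })
  );
});
```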
Get Ready For INP
While INP might not be a direct ranking factor, it is still a useful metric for creating the fastest page experience: site speed is known to improve sales, clicks, and ad views, and it aligns with signals that Google uses for ranking.
Meta announced it will start labeling images created by AI across Facebook, Instagram, and Threads in the coming months.
The move comes as AI image generation tools grow in popularity, making it harder to distinguish human-made content from AI-created content.
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” states Nick Clegg, President of Global Affairs at Meta.
Upcoming Features & Expectations
Meta plans to start labeling AI-generated images in multiple languages on its platforms over the next few months.
This move comes during major elections worldwide, when knowing the source of content is especially important.
Meta will employ various techniques to differentiate AI-generated images from other images. These include visible markers, invisible watermarks, and metadata embedded in the image files.
Additionally, Meta is implementing new policies requiring users to disclose when media is generated by artificial intelligence, with consequences for failing to comply.
Meta’s methods follow best practices recommended by the Partnership on AI (PAI), an industry group focused on responsible AI development.
Looking Ahead
Over the next 12 months, Meta will closely monitor user engagement with labeled AI content. These insights will shape the platform’s long-term strategy.
Currently, Meta manually labels images created through its internal AI image generator with disclosures like “Imagined by AI.” Now, the company will leverage its detection tools to label AI content from other providers like Google, Microsoft, Adobe, and leading AI art platforms.
In the interim, Meta advises users to critically evaluate accounts sharing images and watch for visual inconsistencies that may reveal computer generation.
Key Takeaways
Here are some key takeaways for businesses and social media marketers based on Meta’s announcement:
Authenticity and transparency will be crucial as AI image creation goes mainstream. Businesses should consider proactive disclosures if using AI-generated content in marketing.
Two camps may emerge – those who embrace AI creations and those who value “human-made” content. Brands should understand what their audience prefers.
With proper labeling, synthetic content may not negatively impact trust. However, marketers should closely monitor user sentiment surrounding AI usage.
AI could become a powerful marketing asset for content creation at scale, but ethical AI development is advised. Rushing to use immature technologies could backfire.
Interest in synthetic media detection tools, digital watermarking, and metadata standards will likely surge. Savvy marketers should stay on top of these technologies.
Meta’s approach hints at a measured transition, but swift change is likely. Marketers can stay ahead by preparing flexible creative and compliance strategies for synthetic content.
LinkedIn has introduced new AI-powered features to improve the networking experience on its platform.
These updates come at a time when many professionals are re-evaluating their careers and looking to expand their networks.
The new features use AI to streamline processes like making connections, searching for jobs, and sharing content.
A Competitive Edge in the Job Market
LinkedIn cites a recent survey that found 85% of professionals are considering changing jobs in 2024.
LinkedIn acknowledges that building and maintaining a professional network requires a substantial time commitment, with nearly a quarter of people surveyed reporting they spend 6-10 hours per week on networking activities.
AI to the Rescue
LinkedIn’s latest features use AI to make networking more efficient.
This includes a redesigned Network Tab with two sections, “Grow” and “Catch Up,” aimed at helping users expand their network and stay updated on existing connections.
The “Grow” Tab
The “Grow” tab utilizes AI algorithms to help users manage connections and find new relevant contacts. It provides personalized suggestions through the “People You May Know” feature.
The “Catch Up” Tab
The “Catch Up” tab prompts users to reconnect with their network based on updates like job changes, work anniversaries, new hires, or birthdays. This aims to encourage more meaningful interactions between users.
Crafting The Perfect First Message
LinkedIn has introduced a Premium feature that helps users compose introductory messages, addressing the “blank page problem” many face when starting conversations on the platform.
The tool provides draft messages tailored to both parties by incorporating information from their profiles, which users can customize to reflect their voice and goals for the conversation.
The Power of Connection
With over 5 billion connections made on the platform in 2023, LinkedIn expects engagement and interactions to grow in 2024.
As professional relationships can heavily influence career advancement, LinkedIn’s newest AI-powered features aim to make navigating the job market easier and more efficient for users looking to get ahead in competitive industries.
A recent survey by the Pew Research Center found that YouTube and Facebook remain the most widely used social media platforms among adults in the United States. At the same time, the survey showed substantial increases in the number of TikTok users.
According to the survey, YouTube and Facebook are the most widely used online platforms among U.S. adults, with 83% and 68% usage rates, respectively.
Approximately 50% of U.S. adults use Instagram. Other platforms like Pinterest, TikTok, LinkedIn, WhatsApp, and Snapchat have usage rates ranging from 27% to 35% of U.S. adults.
This year’s survey was the first to ask about BeReal, a newer photo-sharing app with a usage rate of just 3% among U.S. adults.
The percentage of U.S. adults who use TikTok has increased from 21% in 2021 to 33% currently. This growth rate for TikTok exceeds the more modest or stagnant growth rates observed for other social media platforms over the same period.
Age Disparities In Social Media Use
The survey results reveal differences in social media platform usage across age groups. Adults under 30 were likelier to use Instagram, Snapchat, and TikTok than older adults.
For example, 78% of 18-29-year-olds reported using Instagram, much higher than the 15% of adults 65 and over. Snapchat and TikTok followed similar usage patterns, with younger adults showing higher rates.
In contrast, YouTube and Facebook had more consistent usage across age groups, though younger adults still exhibited higher engagement on these platforms than older adults.
Demographic Differences In Social Media Use
The Pew Research Center study revealed demographic differences in social media platform usage:
Instagram: More popular among Hispanic and Asian adults, women, and those with some college education.
TikTok: Higher usage rates among Hispanic adults and women.
LinkedIn: Most popular among Americans with higher educational attainment.
Twitter (now “X”): Usage correlates with higher household incomes.
Pinterest: Significantly more popular among women.
WhatsApp: More frequently used by Hispanic and Asian adults.
Takeaways For Social Media Marketers
The Pew Research Center’s latest findings on social media usage in the United States provide several valuable insights that social media marketers should consider when developing marketing strategies:
YouTube has broad appeal across all age groups, making it an essential platform for video campaigns targeting a broad audience.
Facebook maintains an extensive user base and provides capabilities for targeted advertising and reaching diverse demographics.
TikTok is experiencing explosive growth, especially among younger users, presenting opportunities for brands to leverage its creative and viral nature.
Instagram is highly popular with youth and minority groups like Hispanics and Asians, making it suitable for campaigns targeting these demographics.
LinkedIn caters to educated professionals, making it ideal for B2B marketing, thought leadership, and employer branding.
Short-form video content is rising in popularity, as seen with TikTok, so bite-sized engaging videos can capture limited user attention spans.
Though smaller in scale, niche platforms like Pinterest, Snapchat, and WhatsApp enable targeted niche marketing opportunities.
Emerging platforms such as BeReal could provide first-mover advantages as they expand.
Snapchat and TikTok are essential for engaging users under 30 years old.
Cross-platform campaigns allow greater reach and unified messaging.
Platforms popular with specific audiences like Hispanics, Asians, and higher-income households, such as WhatsApp and Twitter, should be considered when marketing to those groups.
Marketers can apply these insights to craft platform-specific strategies tailored to user demographics and behaviors. A nuanced understanding of the latest trends can inform more effective social media engagement and returns on marketing investment.
Methodology
The Pew Research Center surveyed 5,733 U.S. adults between May 19 and September 5, 2023. Ipsos carried out the survey using both online and mail methodologies to obtain a demographically representative sample of the U.S. adult population. The results were weighted by gender, race and ethnicity, education, and other relevant factors to align with U.S. Census benchmarks.
The survey represented a transition from traditional phone polling to a combination of web and mail. The Pew Research Center has provided details on the survey methodology and the potential impact of this change for those interested in better understanding the data collection process.