How to Turn Off Meta AI on WhatsApp, Facebook, and More: A Comprehensive Guide

This article is designed for users who want to understand the nature of Meta AI and take practical steps to control its functionality. Whether you’re concerned about privacy, annoyed by unsolicited suggestions, or simply prefer a clutter-free app interface, the following sections will equip you with the knowledge and tools to navigate this new feature. We’ll cover the technical reasons behind its permanence, the specific actions you can take to mute or disable aspects of it, and additional strategies to safeguard your data, ensuring you can use Meta’s apps on your terms.
Understanding Meta AI: What It Is and Why It Can’t Be Fully Disabled
Meta AI is an intelligent assistant developed by Meta, leveraging the Llama 3.2 model to enhance user interaction across its ecosystem. This AI can perform a variety of tasks, such as conducting web searches, generating text responses, creating images, and even assisting with translations or scheduling, all accessible within your chats or through a dedicated button. On WhatsApp, the blue circle appears above the new chat icon, while on Facebook and Instagram, it’s integrated into the search bar, and on Messenger, it pops up in conversations. Its design aims to make these apps more interactive and helpful, aligning with Meta’s broader goal of embedding AI into everyday digital life.
However, the inability to fully disable Meta AI stems from its deep integration into the core architecture of these applications. Unlike optional features that can be toggled off, Meta AI is a foundational component, meaning its icon and underlying data infrastructure remain active even if you limit its use. This design choice is driven by Meta’s strategy to collect user interaction data—such as chat patterns, preferences, and engagement levels—to refine its algorithms and offer personalized experiences. While this benefits Meta’s advertising and development goals, it raises privacy concerns, as the AI may process metadata or conversation snippets, even if personal chats are end-to-end encrypted. The lack of a universal opt-out option reflects Meta’s intent to keep the feature accessible to all users, though it has provided partial controls to address user feedback.
The blue circle or button serves as a constant reminder of Meta AI’s presence, and attempts to remove it through app settings or third-party tools have proven ineffective. This persistence is particularly noticeable in WhatsApp, where the AI is preloaded into chats, and in Facebook, where it occasionally summarizes comments on posts. Users have expressed frustration over the lack of a simple “turn off” switch, especially given concerns about data privacy and the potential for profiling based on AI interactions. While Meta has not publicly detailed the full extent of data collection, the feature’s mandatory nature suggests it’s a long-term investment, likely to evolve with future updates rather than be removed.
Step-by-Step Guide to Limit Meta AI Across Platforms
While you can’t completely eliminate Meta AI, you can significantly reduce its visibility and intrusiveness by muting the chatbot, disabling specific features like comment summaries, and minimizing interactions. Below is a detailed, step-by-step guide to help you take control on each platform:
1. Muting the Meta AI Chatbot
Muting the chatbot is the most effective way to stop Meta AI from sending notifications or proactively engaging with you. Here’s how to do it on each app:
- On Facebook:
  - Open the Facebook mobile app on your device.
  - Tap the search icon (magnifying glass) at the top of the screen.
  - Locate and tap the Meta AI icon, which appears as a blue circle with the AI logo.
  - Tap the info button (an “i” in a circle) in the top right corner of the chat.
  - Select “Mute” from the options, then choose “Until I change it” to silence notifications indefinitely. This mutes the Meta AI chatbot on Facebook without removing its presence.
  - Confirm your selection, and the chatbot will no longer interrupt you with suggestions.
- On Instagram:
  - Launch the Instagram app and tap the search icon at the bottom.
  - Find the Meta AI icon in the search results or suggestions and tap it.
  - Tap the info button in the top right corner of the chat interface.
  - Choose “Mute” and select “Until I change it” to stop the AI from popping up unsolicited.
  - Save your preference, and enjoy a quieter experience.
- On WhatsApp:
  - Open WhatsApp and open the Meta AI chat by typing @MetaAI or selecting it from the chat list.
  - Return to the main chats tab and swipe left on the Meta AI chat thread.
  - Tap “More” (three dots), then select “Mute.”
  - Choose “Always” to stop Meta AI notifications permanently.
  - The AI then stays silent unless you initiate contact.
- On Messenger:
  - Open the Messenger app and navigate to a chat with Meta AI.
  - Tap the chat name at the top of the screen to open the menu.
  - Select “Mute” from the options, then choose a duration (e.g., 1 hour) or “Until I turn it back on.”
  - Confirm, and the chatbot will stay quiet until you reactivate it.
2. Disabling AI-Generated Comment Summaries on Facebook
Facebook’s AI feature automatically summarizes comments on your posts, which some users find unnecessary. Here’s how to turn it off:
- Open the Facebook app and tap your profile icon (bottom right) or the menu (three lines).
- Scroll to “Settings & Privacy” and tap “Settings.”
- Navigate to “Audience and Visibility” and select “Posts.”
- Find the option labeled “Comment Summaries on your posts” and toggle it off.
- This change applies only to your posts, giving you control over your content’s presentation.
3. Additional Strategies to Minimize Meta AI’s Impact
Beyond muting and disabling, you can take proactive steps to limit Meta AI’s influence:
- Avoid Interaction: Refrain from tapping the blue circle, using @MetaAI mentions, or engaging with the chatbot. This reduces the data it collects about your preferences.
- Reset AI Memory: In any Meta AI chat, type /reset-ai and confirm to clear the AI’s memory of your past interactions. This doesn’t deactivate the feature but limits its personalization.
- Adjust Notification Settings: Go to each app’s notification settings and turn off alerts for Meta AI-specific updates.
- Use App Alternatives: For privacy-conscious users, consider third-party messaging apps that don’t integrate AI assistants, though this requires switching platforms.
Privacy Concerns and Long-Term Implications
Muting and disabling these features significantly reduces Meta AI’s intrusiveness, but it doesn’t eliminate its presence. The AI continues to run in the background, collecting metadata like usage patterns and app navigation data to improve its algorithms. This raises privacy concerns, particularly for users who share sensitive information or prefer minimal data tracking. While personal chats remain end-to-end encrypted, interactions with Meta AI are not, meaning any queries or content generated could be logged and analyzed, potentially shared with third parties under Meta’s data policy.
The lack of a full disable option suggests Meta views AI as a core future component, likely to expand with features like real-time translation or enhanced content moderation. This permanence could lead to regulatory scrutiny, especially in regions with strict privacy laws like the EU, where users might demand more control. For now, the provided steps offer the best workaround, though they require ongoing vigilance to avoid unintended engagement.
Broader Context and User Experience
This integration reflects a broader trend among tech giants to embed AI into social platforms, competing with tools like Google’s Gemini or OpenAI’s ChatGPT. Meta’s approach prioritizes accessibility, making AI available to all users rather than a paid tier, but it sacrifices user autonomy. Some appreciate the convenience—e.g., quick answers during chats—while others find it invasive, especially on WhatsApp, where privacy has historically been a selling point. The blue circle’s visibility has become a point of contention, with users on social media calling for a toggle-off option, a demand Meta has yet to address.
As Meta refines its AI, future updates might introduce more customization, such as opt-in settings or enhanced privacy controls. Until then, the steps outlined here provide a practical solution. This guide aims to empower users, balancing the benefits of AI assistance with the need for personal control. Have you tried muting Meta AI or adjusting these settings? Share your experience and tips in the comments, and explore more tech solutions at briskfeeds.com.
OpenAI Launches Image Generation API, Bringing DALL-E Powers to Developers

OpenAI has released its advanced image generation technology as an API, allowing developers to integrate the powerful AI image creation capabilities directly into their applications. This move significantly expands access to the technology previously available primarily through ChatGPT and other OpenAI-controlled interfaces.
The newly released API gives developers programmatic access to the same image generation model that powers ChatGPT’s visual creation tools. Companies can now incorporate sophisticated AI image generation into their own applications without requiring users to interact with OpenAI’s platforms directly.
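For a sense of what that programmatic access looks like, here is a minimal sketch using OpenAI’s Python SDK; the model identifier and the shape of the response are assumptions to verify against OpenAI’s current documentation rather than details confirmed in the announcement.

```python
# Minimal sketch of a text-to-image request against OpenAI's image API.
# Assumes the `openai` Python SDK (v1.x) and an OPENAI_API_KEY environment
# variable; the model name below is an assumption, not taken from the article.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",  # assumed model identifier
    prompt="A watercolor illustration of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)

# Depending on the model, each result carries a URL or base64-encoded bytes.
image = result.data[0]
if getattr(image, "b64_json", None):
    with open("lighthouse.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))
else:
    print(image.url)
```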
“We’re making our image generation models available via API, allowing developers to easily integrate image generation into their applications,” OpenAI stated in its announcement. The company emphasized that the API has been designed with both performance and responsibility in mind, implementing safety systems similar to those used in their consumer-facing products.
The image generation API supports a wide range of capabilities, including creating images from text descriptions, editing existing images with text instructions, and generating variations of uploaded images. Developers can specify parameters such as image size, style preferences, and quality levels to customize outputs for their specific use cases.
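To illustrate the editing and parameter options described above, an edit request in the same SDK might look like the sketch below; the exact parameters each model accepts vary, so treat the values here as placeholders to check against the API reference.

```python
# Sketch of an image edit: supply an existing image plus a text instruction.
# Parameter values are illustrative; consult the API reference for the sizes
# and quality settings each model actually supports.
from openai import OpenAI

client = OpenAI()

with open("lighthouse.png", "rb") as original:
    edited = client.images.edit(
        model="gpt-image-1",  # assumed model identifier
        image=original,
        prompt="Add a small sailboat on the horizon",
        size="1024x1024",
    )

print(edited.data[0])  # URL or base64 payload, depending on the model
```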
Major software companies have already begun implementing the technology. Design and creative software leaders like Adobe and Figma are among the first partners to integrate the API into their products, enabling users to generate images directly within their existing workflows rather than switching between multiple applications.
The API operates on a usage-based pricing model, with costs calculated based on factors including image resolution, generation complexity, and volume. Enterprise customers with specialized needs can access custom pricing plans and dedicated support channels, while smaller developers can get started with standard plans.
Security and content moderation remain central to the implementation. OpenAI has incorporated safety mechanisms to prevent the generation of harmful, illegal, or deceptive content. The system includes filters for violent, sexual, and hateful imagery, as well as protections against creating deepfakes of real individuals without proper authorization.
“This represents a significant step in making advanced AI capabilities more accessible to developers of all sizes,” said technology analyst Maria Rodriguez. “Previously, building this level of image generation required massive resources and expertise that most companies simply didn’t have.”
Industry experts note that the API’s release will likely accelerate the integration of AI-generated imagery across a wide range of applications, from e-commerce product visualization to educational tools and creative software. The programmable nature of the API allows for more customized and contextual image generation compared to using standalone tools.
For enterprises looking to incorporate image generation into their products, the API offers advantages including reduced latency, customization options, and the ability to maintain users within their own ecosystems rather than redirecting them to external AI tools.
The release comes amid growing competition in the AI image generation space, with competitors like Midjourney, Stable Diffusion, and Google’s image generation models all vying for developer and enterprise adoption. OpenAI’s strong brand recognition and the widespread familiarity with DALL-E through ChatGPT give it certain advantages, though pricing and performance factors will influence adoption rates.
Developers interested in implementing the image generation API can access documentation and begin integration through OpenAI’s developer portal. The company provides code examples in popular programming languages and comprehensive guides for common use cases to streamline the implementation process.
OpenAI emphasizes that all API users must adhere to their usage policies, which prohibit applications that could cause harm or violate the rights of others. The company maintains the ability to monitor API usage and can suspend access for applications that violate these terms.
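In practice, those policy checks surface to developers as ordinary API errors, so integrations usually wrap generation calls defensively. The sketch below assumes the v1 Python SDK’s exception types.

```python
# Defensive handling around a generation call: requests that violate usage
# policies (or are malformed) come back as HTTP 400 errors in the v1 SDK.
import openai
from openai import OpenAI

client = OpenAI()

try:
    result = client.images.generate(
        model="gpt-image-1",  # assumed model identifier
        prompt="Product photo of a ceramic mug on a marble counter",
    )
except openai.BadRequestError as err:
    # Includes content-policy rejections; log and show a friendly message.
    print(f"Request rejected: {err}")
except openai.RateLimitError:
    print("Rate limited; retry with backoff.")
```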
As AI-generated imagery becomes increasingly mainstream, ethical considerations around disclosure and transparency continue to evolve. Many platforms require or encourage disclosure when AI-generated images are used commercially, and OpenAI recommends that developers implement similar transparency measures in their applications.
The API release represents OpenAI’s continued strategy of first developing advanced AI capabilities for direct consumer use before making them available as programmable services for the broader developer ecosystem. This approach allows the company to refine its models and safety systems before wider deployment while maintaining some level of oversight regarding how its technology is implemented.
Columbia Student Suspended for AI Cheating Tool Secures $5.3M in Funding

Former Columbia University student Jordan Chong has transformed academic punishment into entrepreneurial opportunity by securing $5.3 million in seed funding for his controversial AI startup. The 21-year-old, who was suspended from the prestigious university for creating an AI interview cheating tool, has now founded Cluely, a company focused on developing AI tools for interview assistance.
“I got kicked out of Columbia for building an AI tool that helped me cheat on class interviews,” Chong stated in recent interviews. Rather than abandoning his project after facing serious academic consequences, the young entrepreneur refined his technology and attracted significant investor interest.
According to TechCrunch, the $5.3 million seed round was led by Founders Fund, with participation from several angel investors who recognized potential in Cluely’s approach to AI-assisted communication. This funding success comes during a challenging period for AI startups, with venture capital investments in the sector showing notable decline in recent months.
“Cluely is out. cheat on everything.” - Roy (@im_roy_lee), April 20, 2025
The Technology Behind Cluely
Cluely’s technology analyzes patterns in interview questions and generates contextually appropriate responses based on an extensive database of successful answers. The system can provide real-time suggestions during interviews, helping users respond more effectively to unexpected questions.
The application initially focused on academic settings but has expanded to cover job interviews and other professional assessments. Users can access Cluely’s suggestions through mobile applications and browser extensions designed to operate discreetly during interview situations.
“Our technology isn’t just about providing answers,” Chong explains. “It’s about augmenting human capabilities in situations where people often struggle to perform their best due to anxiety or limited preparation time.”
Reports from Digital Watch corroborate this account of pattern analysis and generated responses, noting that users can access the suggestions through various interfaces, enabling what some consider an unfair advantage in assessment situations.
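Cluely has not published its implementation, but the loop described in these reports (listen for a question, then generate a tailored suggestion) can be sketched in outline. Everything below is hypothetical: the helper names, the use of a general-purpose LLM endpoint, and the prompt are stand-ins, not Cluely’s actual design.

```python
# Purely illustrative sketch of a real-time suggestion loop of the kind
# reporting attributes to Cluely. Nothing here reflects Cluely's actual code;
# transcribe() and the model choice are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()


def transcribe(audio_chunk: bytes) -> str:
    """Hypothetical speech-to-text stand-in; a real tool would stream audio
    through a transcription service."""
    raise NotImplementedError


def suggest_response(question: str, background: str) -> str:
    """Generate a contextually appropriate answer suggestion, conditioned on
    the user's background, as the reporting describes."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": f"You coach interviewees. Candidate background: {background}"},
            {"role": "user",
             "content": f"Suggest a strong, concise answer to: {question}"},
        ],
    )
    return completion.choices[0].message.content
```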
Ethical Concerns and Academic Integrity
The emergence and funding of Cluely has sparked intense debate within educational and professional communities. Academic institutions, including Columbia University, have expressed concerns about tools that potentially undermine the integrity of assessment processes.
“When we evaluate students or job candidates, we’re trying to gauge their actual knowledge and abilities,” explained Dr. Michael Chen, Dean of Student Affairs at a prominent East Coast university. “Tools that artificially enhance performance risk making these assessments meaningless.”
Maeil Business Newspaper reports that many universities are already adapting their interview processes to counter AI-assisted cheating. Some have implemented stricter monitoring protocols, while others are moving toward assessment methods that are more difficult to circumvent with AI assistance.
Educational technology experts suggest that Cluely represents a new frontier in the ongoing balance between assessment integrity and technological advancement. “We’ve dealt with calculators, internet access, and basic AI tools,” noted education technology researcher Dr. Lisa Rodriguez. “But real-time interview assistance takes these challenges to a completely different level.”
Growing Market Despite Controversy
Despite ethical concerns, market analysts predict significant growth in AI-assisted communication tools. The global market for such technologies is projected to reach $15 billion by 2027, according to recent industry reports.
Cluely is positioning itself at the forefront of this emerging sector. The company plans to use its newly secured funding to expand its team, enhance its core technology, and develop new features targeting various interview and assessment scenarios.
“We’re currently focused on interview preparation and assistance,” Chong stated, “but our vision extends to supporting all forms of high-stakes communication, from negotiation to public speaking and beyond.”
FirstPost highlights that while the company markets its product as an “AI communication assistant,” many educators view it as explicitly designed for cheating. This perception stems from Chong’s own admission about the tool’s origins and its “cheat on everything” tagline that has appeared in some marketing materials.
Regulatory Landscape and Future Challenges
As AI communication tools like Cluely gain traction, they face an evolving regulatory landscape. Several states are considering legislation that would require disclosure when AI assistance is used in academic or professional settings.
“We anticipate increased regulatory attention as our technology becomes more widespread,” acknowledged Chong. “We’re committed to working with regulators to find the right balance between innovation and protecting the integrity of assessment systems.”
Legal experts suggest that the coming years will see significant development in how AI-assisted communication tools are regulated, particularly in educational and employment contexts. Some predict requirements for disclosure when such tools are used, while others anticipate technical countermeasures to detect AI assistance.
Adapting Assessment Methods for the AI Era
The rise of tools like Cluely is forcing educational institutions and employers to reconsider traditional assessment methods. Many are already shifting toward evaluation approaches that are more difficult to game with AI assistance.
“We’re seeing increased interest in project-based assessments, collaborative problem-solving exercises, and demonstrations of skills in controlled environments,” explained Dr. Jennifer Wise, an expert in educational assessment. “The goal is to evaluate capabilities in ways that AI can’t easily enhance.”
Some forward-thinking organizations have embraced AI as part of the assessment process, explicitly allowing candidates to use AI tools while focusing evaluation on how effectively they leverage these resources.
The Future of Human-AI Collaboration
Beyond the immediate context of interviews and assessments, Cluely represents a broader trend toward AI-augmented human performance. This trend raises fundamental questions about how we define and value human capabilities in an era of increasingly sophisticated AI assistance.
For Chong and Cluely, these philosophical questions take a back seat to the immediate business opportunity. With $5.3 million in fresh funding, the company is poised for rapid growth despite its controversial origins.
As TechCrunch notes, Cluely’s success highlights the complex relationship between academic integrity and technological innovation. While educational institutions grapple with how to maintain assessment validity, entrepreneurs like Chong are capitalizing on the demand for tools that enhance human performance—regardless of the ethical implications.
Instagram’s AI Teen Detection: The 2025 Surprise You’ll Want to See!

Instagram has rolled out a groundbreaking use of artificial intelligence (AI) to detect and protect teen users, a move announced in 2025 that’s catching global attention. This “adult classifier” system, developed by Meta, uses AI to identify users under 18 based on profile data, interactions, and even birthday posts, automatically enforcing stricter safety settings. Launched as part of a broader effort to safeguard younger audiences, it promises 98% accuracy in age estimation, but it’s sparking debates about privacy and parental control.
This isn’t just a tech update—it’s a worldwide story with implications for social media safety. With AI flagging teens, how will it balance protection and personal freedom? Let’s dive into the details.
The initiative addresses growing concerns about teen safety on social media, building on Meta’s prior investments in AI technology. The “adult classifier” analyzes signals like follower lists, content engagement, and friend messages—such as “happy 16th birthday”—to estimate age, as noted in a tech overview by Meta. This system, first tested in 2022, now enforces settings like private accounts and restricted messaging for detected teens, responding to pressure from parents and regulators worldwide.
The AI operates by processing vast datasets from user activity. It cross-references profile details with behavioral patterns, achieving a 98% accuracy rate in distinguishing users under 25, according to a detailed report on age detection. When it identifies a teen, Instagram applies safeguards: accounts default to private, adults can’t message them unless connected, and content filters block harmful material. This real-time adjustment aims to protect millions of users, but the scale of data involved raises questions.
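Meta has not released the classifier itself, but the signal-weighting idea (combining stated age, birthday messages, and engagement patterns into a single estimate) can be shown with a toy model. The features, weights, and threshold below are invented for illustration; the production system is a trained model over far richer data.

```python
# Toy illustration of combining weak profile signals into an under-18 estimate.
# Features, weights, and threshold are invented; Meta's "adult classifier" is
# a trained model over far richer signals, not a hand-tuned rule like this.
from dataclasses import dataclass


@dataclass
class ProfileSignals:
    stated_age: int                  # self-reported age, which may be false
    teen_birthday_mentions: int      # e.g., "happy 16th birthday" in messages
    engagement_skews_teen: bool      # followers and interactions skew teenage


def likely_under_18(s: ProfileSignals) -> bool:
    """Combine weak signals into a single under-18 score."""
    score = 0.0
    if s.stated_age < 18:
        score += 0.6
    score += min(s.teen_birthday_mentions, 3) * 0.2
    if s.engagement_skews_teen:
        score += 0.3
    return score >= 0.5  # invented threshold; real systems tune this carefully


# Example: a profile claiming to be 21 but with strong teen signals is flagged,
# which would trigger the protections described above.
print(likely_under_18(ProfileSignals(21, 2, True)))  # True
```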
Experts highlight the benefits. The 98% accuracy could shield teens from predators and mental health risks, a concern backed by an AP investigation into teen safety. For parents, it offers peace of mind, with options to monitor settings. Businesses might see safer platforms boost user trust, potentially increasing ad revenue. Yet, the system isn’t perfect—errors could misclassify adults as teens, limiting their access.
Risks are significant. AI relies on data, and biases in training sets—perhaps from uneven global representation—could lead to mistakes. Privacy advocates worry about the collection of personal data, including video selfies for verification, a method detailed in a Fast Company article on AI testing. Users can appeal, but the lack of transparency on data use is a gap. Cultural differences, like varying age norms, might also confuse the algorithm.
Public reaction adds a layer. On X, teens express frustration over lost control, with one posting, “AI deciding my account settings feels invasive.” Parents, however, praise the safety boost, calling it “long overdue.” This split, missing from official statements, could shape future tweaks, perhaps leading to opt-in options or clearer policies.
Long-term impact is another gap. Will AI detection hold up over years as teen behavior evolves? How will it handle new threats, like deepfakes or cyberbullying spikes? Meta plans a year-long review, but these questions linger. The system’s success could influence other platforms, like TikTok, to adopt similar tools.

As Instagram tests this AI teen detection, the world watches. Will it redefine social media safety or raise new privacy challenges? Early feedback will guide its future.
Share your views below. For more updates, visit briskfeeds.com.