
Cybersecurity

California Man Pleads Guilty to Disney Hack Using Malicious AI Tool, Exposing 1.1TB of Data


A 25-year-old California man, Ryan Mitchell Kramer, has pleaded guilty to hacking a Disney employee’s personal computer, resulting in the theft of 1.1 terabytes of confidential data, including sensitive financial and strategic information. The 2024 breach involved a malicious AI image generation tool that Kramer distributed online, leading to one of the most significant data leaks in Disney’s history. The incident has raised serious concerns about vulnerabilities in corporate cybersecurity and the growing misuse of AI technologies.

Kramer, operating under the alias “NullBulge,” admitted to two felony charges: accessing a computer to obtain information and threatening to damage a protected computer, each carrying a maximum sentence of five years in federal prison. According to his plea agreement, Kramer posted a fraudulent AI art generation app on GitHub, which was actually embedded with malicious code. The app, named ComfyUI_LLMVISION, appeared to be an extension of the legitimate ComfyUI image generator but was designed to harvest sensitive data, such as passwords and payment card details, from any device that installed it. Between April and May 2024, a Disney employee unknowingly downloaded the app, granting Kramer access to both personal and work accounts, including a non-public Disney Slack channel containing over 44 million messages.

The stolen data was extensive, encompassing more than 18,800 spreadsheets and 13,000 PDFs that revealed Disney’s internal operations, such as Disney+ streaming revenue, Genie+ theme park pass sales, and pricing strategies. It also included personal information of employees and customers, such as bank details, medical records, and passport numbers of Disney Cruise Line workers. In July 2024, Kramer, posing as a member of a fictitious Russian hacktivist group called NullBulge, contacted the employee and threatened to leak the data. When the employee did not respond, Kramer followed through, releasing the information on multiple online platforms, an act that exposed Disney to significant reputational and financial risks in the entertainment industry.

The fallout from the breach was swift. On July 15, 2024, the Wall Street Journal reported the hack, prompting Disney to launch an internal investigation in collaboration with the FBI, which continues to probe the incident. The leaked data included conversations about Disney’s corporate website maintenance, software development, employee assessments, and even personal details like photos of employees’ dogs, stretching back to 2019. Disney issued a statement expressing relief that Kramer had been charged, emphasizing their commitment to working with law enforcement to combat cybercrime. However, the breach also led to unintended consequences for the affected employee, who was fired after a forensic analysis revealed unrelated inappropriate content on their work computer, adding a layer of controversy to the incident.

Kramer’s actions weren’t limited to Disney—he admitted to targeting two other victims who downloaded the malicious app, gaining unauthorized access to their computers as well. His use of AI as a hacking tool underscores a growing trend where cybercriminals exploit emerging technologies to bypass traditional security measures. The ComfyUI_LLMVISION app, for instance, was marketed as a legitimate tool for creating AI-generated art, a popular application in the creative industry. However, its underlying code enabled Kramer to steal sensitive data, highlighting the risks of downloading unverified software from platforms like GitHub or Hugging Face, which are often trusted by developers and users alike.
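One practical defense against trojanized downloads like ComfyUI_LLMVISION is to vet files before running them. A minimal sketch (illustrative only, not drawn from the case itself) that computes a file’s SHA-256 digest so it can be compared against a checksum published by a project’s maintainers:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading in
    streaming chunks so large files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against a checksum published by the project's
# maintainers before installing or executing anything.
```

A matching digest only proves the file is the one the maintainers published; it does not prove the maintainers themselves are trustworthy, which is exactly the gap this attack exploited.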

This incident has broader implications for corporate cybersecurity, especially as companies increasingly rely on AI-driven solutions for innovation. The Disney hack mirrors other recent breaches, such as the 2023 MOVEit supply chain attack, where vulnerabilities in third-party software led to massive data leaks. Experts like Dan Goodin, a security editor, argue that organizations must adopt stricter vetting processes for software downloads and enhance employee training to recognize phishing attempts or malicious apps. Disney’s decision to phase out Slack as a workplace communication tool, following the breach, reflects a growing awareness of these risks, with the company reportedly considering Microsoft Teams as a more secure alternative.

The legal consequences for Kramer are still unfolding, with his first court appearance scheduled in the coming weeks. Each of his charges carries a potential five-year sentence, though sentencing will depend on factors like his cooperation with authorities and the extent of the damage caused. For Disney, the breach has exposed vulnerabilities in its data security practices, prompting a reevaluation of how sensitive information is stored and accessed. The company’s spokesperson reiterated their commitment to safeguarding employee and customer data, but the incident has sparked a wider conversation about the balance between technological innovation and security in the corporate world.

The Disney hack serves as a stark reminder of the double-edged nature of AI—while it offers immense potential for creativity and efficiency, it can also be weaponized by malicious actors. As AI tools become more accessible, the risk of such attacks is likely to grow, challenging companies to stay ahead of evolving threats. For individuals and organizations alike, this incident underscores the importance of vigilance when adopting new technologies, particularly those sourced from open platforms. What are your thoughts on the misuse of AI in cyberattacks, and how can companies better protect themselves in this evolving landscape? Share your perspective in the comments—we’d love to hear your insights on this critical issue.

Liam Chen is a cybersecurity analyst with a background in information security and risk management. He has worked with various organizations to enhance their cyber defense strategies. At BriskFeeds, Liam reports on cyber threats, data protection, and the intersection of technology and security policies.

Cybersecurity

Billie Eilish AI Fakes Flood Internet: Singer Slams “Sickening” Doctored Met Gala Photos


Billie Eilish AI fakes are the latest example of deepfake technology running rampant, as the Grammy-winning singer has publicly debunked viral images claiming to show her at the 2025 Met Gala. Eilish, who confirmed she did not attend the star-studded event, called the AI-generated pictures “sickening,” highlighting the growing crisis of celebrity image misuse and online misinformation in the USA and beyond.

LOS ANGELES, USA – The internet was abuzz with photos seemingly showing Billie Eilish at the 2025 Met Gala, but the singer herself has forcefully shut down the rumors, revealing the Billie Eilish AI fakes were entirely fabricated. In a recent social media statement, Eilish confirmed she was nowhere near the iconic fashion event and slammed the AI-generated images as “sickening.” This incident throws a harsh spotlight on the rapidly escalating problem of deepfake technology and the unauthorized use of celebrity likenesses, a concern increasingly impacting public figures and stirring debate across the United States.

The fake images, which depicted Eilish in various elaborate outfits supposedly on the Met Gala red carpet, quickly went viral across platforms like X (formerly Twitter) and Instagram. Many fans initially believed them to be real, underscoring the sophisticated nature of current AI image generation tools. However, Eilish took to her own channels to set the record straight. “Those are FAKE, that’s AI,” she reportedly wrote, expressing her disgust at the digitally manipulated pictures. “It’s sickening to me how easily people are fooled.” Her frustration highlights a growing unease about how AI can distort reality, a problem also seen with other AI systems, such as Elon Musk’s Grok AI, which has been criticized for spreading misinformation.

This latest instance of Billie Eilish AI fakes is far from an isolated event. The proliferation of deepfake technology, which uses artificial intelligence to create realistic but fabricated images and videos, has become a major concern. Celebrities are frequent targets, with their images often used without consent in various contexts, from harmless parodies to malicious hoaxes and even non-consensual pornography. The ease with which these fakes can be created and disseminated poses a significant threat to personal reputation and public trust. The entertainment industry is grappling with AI on multiple fronts, including stars urging for copyright protection against AI.

The “Sickening” Reality of AI-Generated Content

Eilish’s strong condemnation of the Billie Eilish AI fakes reflects a broader sentiment among artists and public figures who feel increasingly vulnerable to digital manipulation. The incident raises critical questions about:

  • Consent and Likeness: The unauthorized use of a person’s image, even if AI-generated, infringes on their rights and control over their own persona.
  • The Spread of Misinformation: When AI fakes are believable enough to dupe the public, they become potent tools for spreading false narratives.
  • The Difficulty in Detection: As AI technology advances, telling real from fake becomes increasingly challenging for the average internet user. This is a concern that even tech giants are trying to address, with OpenAI recently committing to more transparency about AI model errors.

The Met Gala, known for its high fashion and celebrity attendance, is a prime target for such fabrications due to the intense public interest and the visual nature of the event. The Billie Eilish AI fakes serve as a stark reminder that even high-profile events are not immune to this form of digital deception. The potential for AI to be misused is a widespread concern, touching various aspects of life, including the use of AI by police forces.

Legal and ethical frameworks are struggling to keep pace with the rapid advancements in AI. While some jurisdictions are beginning to explore legislation to combat malicious deepfakes, the global and often anonymous nature of the internet makes enforcement difficult. For victims like Billie Eilish, speaking out is one of the few recourses available to debunk the fakes and raise awareness. As AI becomes more integrated into content creation, the lines between authentic and synthetic media will continue to blur, making critical thinking and media literacy more important than ever for consumers. The public’s desire for authenticity is also pushing for clearer identification, like the calls for AI chatbots to disclose their non-human status.

What are your thoughts on the rise of AI-generated fakes and their impact on celebrities and public trust? Share your comments below and follow Briskfeeds.com for ongoing coverage of AI, technology, and misinformation.


Cybersecurity

Alleged 89 Million Steam 2FA Codes Leaked, Twilio Denies Breach


On May 14, 2025, a database allegedly containing 89 million Steam 2FA (two-factor authentication) codes surfaced online, drawing immediate attention from the security community. The claim has not been verified by any official source, but if genuine it would rank among the larger authentication-data leaks to date.

The leaked database allegedly included sensitive details such as Steam account names, email addresses, and 2FA codes. These codes, crucial for securing user accounts, were part of a dataset advertised on a hacking forum for $5,000. The data reportedly contained historic SMS text messages with one-time passcodes, including recipient phone numbers, confirmation codes for account access, and metadata such as timestamps and delivery statuses. If authentic, this information could expose users to targeted phishing, and in narrow windows to session hijacking, since one-time codes are only exploitable before they expire.
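For context on why leaked one-time passcodes lose value quickly: time-based 2FA codes are derived from a shared secret and the current 30-second time step, so each code goes stale almost immediately. A minimal RFC 6238 TOTP sketch (a generic illustration; Steam Guard uses its own variant with a custom alphabet, not shown here):

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time step,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)        # time-step counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the counter changes every `step` seconds, a code captured from an old SMS is useless for login; the residual risk in a leak like this lies in the phone numbers and account metadata, which enable convincing follow-on phishing.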

Following the emergence of these claims, Twilio, a communications platform reportedly involved, denied any breach. The company stated that it found no evidence its systems were compromised and dismissed the notion that the data originated from its platform. The denial is significant because Twilio delivers SMS and authentication messages for many services, though its connection to Steam has not been confirmed. If the leak is verified, it would likely force a reevaluation of SMS-based two-factor authentication across the industry.

As of now, Steam’s operator, Valve Corporation, has not commented on the alleged breach. The silence has left users uncertain about the safety of their accounts, amplifying concerns about personal information and account security. The incident highlights the broader challenge of maintaining user trust in an era of increasingly sophisticated digital threats.

For now, the focus remains on verifying the leak’s authenticity and understanding its implications. The episode is a stark reminder that one-time codes, and the SMS channels that deliver them, are themselves attractive targets. What are your thoughts on the alleged Steam 2FA code leak and Twilio’s denial? Does it signal a broader problem with SMS-based authentication, or is it an isolated incident? Share your insights in the comments; we’re eager to hear your perspective on this developing story.


Cybersecurity

New Phishing Attack Uses Blob URLs to Steal Passwords


Cybercriminals have developed a sophisticated phishing technique that leverages Blob URLs to create fake login pages within users’ browsers, stealing passwords and even encrypted messages, according to a Hackread report. This method, uncovered by Cofense Intelligence, bypasses traditional email security systems by generating malicious content locally, making it nearly undetectable. As phishing attacks grow more advanced, this new tactic highlights the urgent need for updated defenses and user awareness to protect sensitive data.

The attack begins with a phishing email that appears legitimate, often redirecting users through trusted platforms like Microsoft’s OneDrive before leading them to a fake login page. Unlike typical phishing sites hosted on external servers, these fake pages are created using Blob URLs—temporary local content generated within the user’s browser. TechRadar explains that because Blob URLs are not hosted on the public internet, security systems that scan emails for malicious links cannot easily detect them. The result is a convincing login page that captures credentials, such as passwords for tax accounts or encrypted messages, as detailed by Forbes. This stealthy approach mirrors trends in AI-driven cyber threats, where attackers exploit technology to evade detection.

Cofense Intelligence, as reported by Security Boulevard, first detected this technique in mid-2022, but its use has surged recently. The phishing campaigns often lure users with prompts to log in to view encrypted messages, access tax accounts, or review financial alerts, exploiting trust in familiar brands. Cybersecurity News highlights that Blob URLs start with “blob:http://” or “blob:https://”, a detail users can check in the address bar to identify potential threats. Even so, the complexity of these attacks makes them hard to spot, especially since AI-based security tools are still learning to differentiate between legitimate and malicious Blob URLs.
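The prefix check described above can be automated when scanning links extracted from a page or message. A minimal sketch (the function name is illustrative, not from any security product) that flags URLs using the blob: scheme, which indicates content generated locally in the browser rather than served from the web:

```python
from urllib.parse import urlparse

def flag_blob_links(urls: list[str]) -> list[str]:
    """Return the URLs whose scheme is 'blob'. Such URLs point at
    content created in the user's own browser, so a login page behind
    one was never fetched from a real server."""
    return [u for u in urls if urlparse(u).scheme.lower() == "blob"]
```

In practice this check has to run on the client side (for example in a browser extension or endpoint agent), since the whole point of the technique is that the malicious page never transits an email gateway as a scannable link.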

Protecting against this threat requires a multi-layered approach. Experts recommend avoiding links in unsolicited emails, especially those prompting logins, and verifying URLs directly with trusted sources. Using two-factor authentication (2FA) adds an extra layer of security even if credentials are stolen. Organizations should also invest in advanced email security solutions that can detect unusual redirect patterns, as traditional Secure Email Gateways (SEGs) often fail to catch these attacks, per Security Boulevard.

The broader implications of Blob URL phishing are significant. As remote work and digital transactions increase, the risk of credential theft grows, potentially leading to financial fraud or data breaches. The digital divide further complicates the issue, as not all users have the tools or knowledge to recognize such threats. And because Blob URLs are a legitimate browser feature, used by services like YouTube to reference media data held in the browser, defenders cannot simply block the mechanism outright.

This new phishing tactic serves as a wake-up call for both users and security providers. As cybercriminals continue to innovate, staying ahead requires constant vigilance, improved technology, and widespread education on digital safety. The rise of Blob URL attacks highlights the evolving nature of cyber threats and the importance of proactive defense strategies. What do you think about this sneaky phishing method—how can we better protect ourselves online? Share your thoughts in the comments—we’d love to hear your perspective on this growing cybersecurity challenge.

