
Cybersecurity

Co-op Shuts Down IT Systems After Hacking Attempt, Following M&S Cyber Attack


April 30, 2025 – The UK retailer Co-op has been forced to shut down parts of its IT systems after detecting a hacking attempt, just days after Marks & Spencer (M&S) faced a significant cyber attack that disrupted its operations. The incident, which Co-op described as an attempt to gain unauthorized access, has raised concerns about the growing wave of cyber threats targeting retailers and prompted the company to take preemptive measures to protect its systems. As cybersecurity becomes a critical issue for businesses, Co-op’s response highlights the challenge of safeguarding digital infrastructure in an increasingly hostile threat landscape.

Co-op, which operates over 2,000 grocery stores, 800 funeral parlours, and legal and financial services, announced the partial IT shutdown in a letter to staff on April 29. A report from The Guardian noted that the company “pre-emptively withdrew access to some systems” to ensure their safety, impacting back-office operations and call centre services. While Co-op’s stores, including rapid home deliveries, and funeral homes remain operational, the shutdown has affected behind-the-scenes functions such as stock updates, which rely on head office support. A spokesperson confirmed the incident, stating that Co-op had “recently experienced attempts to gain unauthorised access,” leading to proactive measures to protect its systems.

The timing of the hacking attempt is notable, coming shortly after M&S suffered a major cyber attack linked to the hacking collective Scattered Spider, which disrupted product availability in some stores. A BBC article reported that Co-op did not confirm whether its increased vigilance was a direct response to the M&S incident, but the company emphasized that “protecting our systems is of paramount importance.” The proximity of these incidents underscores the heightened risk facing retailers, particularly as they adopt technology-driven solutions like electronic shelf-edge pricing and online grocery deliveries to improve efficiency and combat issues like shoplifting.

Impact and Context of the Co-op Hacking Attempt

Here’s a summary of the incident:

  • Hacking Attempt: Co-op detected unauthorized access attempts, leading to a partial IT shutdown.
  • Impact: Affected back-office operations and call centre services, but stores and funeral homes remain operational.
  • Context: Follows a major cyber attack on M&S, linked to the Scattered Spider hacking group.
  • Response: Co-op took preemptive measures to protect its systems, with no reported data breaches.

The Co-op incident is part of a broader trend of cyber attacks targeting UK retailers in recent years. It was reported that Morrisons faced a cyber incident via its tech supplier Blue Yonder in late 2024, while WH Smith suffered a breach in 2023 that exposed employee data. These attacks highlight the vulnerabilities in retail supply chains, where interconnected systems create multiple entry points for hackers. In Co-op’s case, the company’s swift action to shut down parts of its IT infrastructure likely prevented a more severe breach, but it also disrupted operations, illustrating the delicate balance between security and functionality.

Co-op’s adoption of technology to streamline operations—such as electronic shelf-edge pricing and fast-track online deliveries—has made it a target for cyber criminals seeking to exploit digital systems. It was reported that the shutdown impacted virtual desktops across the business, affecting teams that manage stock updates and legal services. While Co-op has not disclosed the specific nature of the hacking attempt, the incident follows a pattern of attacks on retailers, often involving ransomware or data theft. The M&S attack, for instance, led to product shortages, showing how cyber incidents can have tangible effects on customer experiences.

The broader cybersecurity landscape for retailers is increasingly perilous, with cybercrime projected to cost the global economy $10.5 trillion annually by 2025, according to Cybersecurity Ventures. Retailers, with their large customer bases and interconnected systems, are prime targets for hackers seeking financial gain or data extortion. The Scattered Spider group, suspected in the M&S attack, has a history of targeting retail and tech firms, often using sophisticated social engineering tactics to gain access. Co-op’s proactive response may have mitigated a similar outcome, but the incident underscores the need for robust cybersecurity measures against future threats.

The Co-op hacking attempt serves as a stark reminder of the cybersecurity challenges facing retailers in the digital age. While the company’s quick action likely prevented a more serious breach, the incident highlights the vulnerabilities inherent in technology-driven operations. As retailers continue to innovate, balancing efficiency with security will be critical to maintaining customer trust and operational stability. The coming months will reveal whether Co-op and other retailers can strengthen their defenses against the rising tide of cyber threats. What’s your take on the growing cyber risks for retailers? How can companies like Co-op better protect their systems? Share your thoughts in the comments, and let’s discuss the future of cybersecurity in retail.

Liam Chen is a cybersecurity analyst with a background in information security and risk management. He has worked with various organizations to enhance their cyber defense strategies. At BriskFeeds, Liam reports on cyber threats, data protection, and the intersection of technology and security policies.


Billie Eilish AI Fakes Flood Internet: Singer Slams “Sickening” Doctored Met Gala Photos


Billie Eilish AI fakes are the latest example of deepfake technology running rampant, as the Grammy-winning singer has publicly debunked viral images claiming to show her at the 2025 Met Gala. Eilish, who confirmed she did not attend the star-studded event, called the AI-generated pictures “sickening,” highlighting the growing crisis of celebrity image misuse and online misinformation in the USA and beyond.

LOS ANGELES, USA – The internet was abuzz with photos seemingly showing Billie Eilish at the 2025 Met Gala, but the singer herself has forcefully shut down the rumors, revealing the Billie Eilish AI fakes were entirely fabricated. In a recent social media statement, Eilish confirmed she was nowhere near the iconic fashion event and slammed the AI-generated images as “sickening.” This incident throws a harsh spotlight on the rapidly escalating problem of deepfake technology and the unauthorized use of celebrity likenesses, a concern increasingly impacting public figures and stirring debate across the United States.

The fake images, which depicted Eilish in various elaborate outfits supposedly on the Met Gala red carpet, quickly went viral across platforms like X (formerly Twitter) and Instagram. Many fans initially believed them to be real, underscoring the sophisticated nature of current AI image generation tools. However, Eilish took to her own channels to set the record straight. “Those are FAKE, that’s AI,” she reportedly wrote, expressing her disgust at the digitally manipulated pictures. “It’s sickening to me how easily people are fooled.” Her frustration highlights a growing unease about how AI can distort reality, a problem also seen with other AI systems, such as Elon Musk’s Grok AI spreading misinformation.

This latest instance of Billie Eilish AI fakes is far from an isolated event. The proliferation of deepfake technology, which uses artificial intelligence to create realistic but fabricated images and videos, has become a major concern. Celebrities are frequent targets, with their images often used without consent in various contexts, from harmless parodies to malicious hoaxes and even non-consensual pornography. The ease with which these fakes can be created and disseminated poses a significant threat to personal reputation and public trust. The entertainment industry is grappling with AI on multiple fronts, including stars urging for copyright protection against AI.

The “Sickening” Reality of AI-Generated Content

Eilish’s strong condemnation of the Billie Eilish AI fakes reflects a broader sentiment among artists and public figures who feel increasingly vulnerable to digital manipulation. The incident raises critical questions about:

  • Consent and Likeness: The unauthorized use of a person’s image, even if AI-generated, infringes on their rights and control over their own persona.
  • The Spread of Misinformation: When AI fakes are believable enough to dupe the public, they become potent tools for spreading false narratives.
  • The Difficulty in Detection: As AI technology advances, telling real from fake becomes increasingly challenging for the average internet user. This is a concern that even tech giants are trying to address, with OpenAI recently committing to more transparency about AI model errors.

The Met Gala, known for its high fashion and celebrity attendance, is a prime target for such fabrications due to the intense public interest and the visual nature of the event. The Billie Eilish AI fakes serve as a stark reminder that even high-profile events are not immune to this form of digital deception. The potential for AI to be misused is a widespread concern, touching various aspects of life, including the use of AI by police forces.

Legal and ethical frameworks are struggling to keep pace with the rapid advancements in AI. While some jurisdictions are beginning to explore legislation to combat malicious deepfakes, the global and often anonymous nature of the internet makes enforcement difficult. For victims like Billie Eilish, speaking out is one of the few recourses available to debunk the fakes and raise awareness. As AI becomes more integrated into content creation, the lines between authentic and synthetic media will continue to blur, making critical thinking and media literacy more important than ever for consumers. The public’s desire for authenticity is also pushing for clearer identification, like the calls for AI chatbots to disclose their non-human status.

What are your thoughts on the rise of AI-generated fakes and their impact on celebrities and public trust? Share your comments below and follow Briskfeeds.com for ongoing coverage of AI, technology, and misinformation.


Alleged 89 Million Steam 2FA Codes Leaked, Twilio Denies Breach


On May 14, 2025, a database allegedly containing 89 million Steam 2FA (two-factor authentication) codes surfaced online, drawing immediate attention from security researchers. The claims have not been verified by official sources, but if genuine, the leak would rank among the more significant authentication-data exposures to date.

The database, advertised on a hacking forum for $5,000, was claimed to contain sensitive details including Steam account names, email addresses, and 2FA codes. The data reportedly consisted of historic SMS text messages carrying one-time passcodes, along with recipient phone numbers, confirmation codes for account access, and metadata such as timestamps and delivery statuses. If authentic, this information could expose users to phishing attacks and session hijacking, where attackers intercept or replay 2FA codes to bypass login protections.

Following the emergence of these claims, Twilio, the communications platform reportedly involved, denied any breach, stating that it found no evidence of a compromise of its systems and dismissing the notion that the data originated from its platforms. The denial is significant because Twilio provides authentication services for many platforms, reportedly including Steam. If the leak were verified, the consequences could be far-reaching, forcing services that rely on SMS-delivered one-time codes to reassess that delivery channel.

As of now, Steam’s operator, Valve Corporation, has not commented on the alleged breach. The absence of an official response leaves users uncertain about the safety of their accounts, amplifying concerns about personal information and account security, and it highlights the broader challenge of maintaining user trust in an era of increasingly sophisticated digital threats.

The immediate focus is on verifying the leak’s authenticity and understanding its implications. The episode serves as a stark reminder of the importance of robust account security at a time when stolen credentials and intercepted codes are traded openly. What are your thoughts on the alleged Steam 2FA code leak and Twilio’s denial: does it signal a broader issue with online security, or is it an isolated incident? Share your insights in the comments; we’re eager to hear your perspective on this developing story.


New Phishing Attack Uses Blob URLs to Steal Passwords


Cybercriminals have developed a sophisticated phishing technique that leverages Blob URLs to create fake login pages within users’ browsers, stealing passwords and even encrypted messages, according to a Hackread report. This method, uncovered by Cofense Intelligence, bypasses traditional email security systems by generating malicious content locally, making it nearly undetectable. As phishing attacks grow more advanced, this new tactic highlights the urgent need for updated defenses and user awareness to protect sensitive data.

The attack begins with a phishing email that appears legitimate, often redirecting users through trusted platforms like Microsoft’s OneDrive before leading them to a fake login page. Unlike typical phishing sites hosted on external servers, these fake pages are built from Blob URLs: short-lived object URLs that point to data held in the user’s browser memory rather than on a remote server. TechRadar explains that because Blob URL content is never hosted on the public internet, security systems that scan emails for malicious links cannot easily detect it. The result is a convincing login page that captures credentials, such as passwords for tax accounts or encrypted messages, as detailed by Forbes. This stealthy approach mirrors trends in AI-driven cyber threats, where attackers exploit technology to evade detection.

Cofense Intelligence, as reported by Security Boulevard, first detected this technique in mid-2022, but its use has surged recently. The phishing campaigns often lure users with prompts to log in to view encrypted messages, access tax accounts, or review financial alerts, exploiting trust in familiar brands. Cybersecurity News highlights that Blob URLs start with “blob:http://” or “blob:https://”, a detail users can check to identify potential threats. However, the complexity of these attacks makes them hard to spot, especially since AI-based security tools are still learning to differentiate between legitimate and malicious Blob URLs, a challenge also seen in AI privacy debates about evolving tech risks.
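The prefix check described above is easy to automate. The snippet below is a minimal illustration of that heuristic (the function name is ours); a mail gateway or browser extension could apply the same test before rendering a link:

```python
def looks_like_blob_url(url: str) -> bool:
    """Flag links using the in-browser blob: scheme noted in the report."""
    u = url.strip().lower()
    return u.startswith(("blob:http://", "blob:https://"))

print(looks_like_blob_url("blob:https://example.com/9115d58c-bcda"))  # → True
print(looks_like_blob_url("https://onedrive.live.com/view"))          # → False
```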

Protecting against this threat requires a multi-layered approach. Experts recommend avoiding clicking on links in unsolicited emails, especially those prompting logins, and verifying URLs directly with trusted sources. Using two-factor authentication (2FA) can add an extra layer of security, even if credentials are stolen. Organizations should also invest in advanced email security solutions that can detect unusual redirect patterns, as traditional Secure Email Gateways (SEGs) often fail to catch these attacks, per Security Boulevard. These protective measures align with strategies in AI communication tools, which aim to secure user interactions in digital spaces.
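As a concrete, deliberately simplified illustration of the “verify URLs directly” advice, a client or gateway could check login links against an allow-list of known hosts before following them. The host names and function below are our own illustrative placeholders, not an official list:

```python
from urllib.parse import urlparse

# Illustrative allow-list; a real deployment would maintain its own.
TRUSTED_LOGIN_HOSTS = {"login.microsoftonline.com", "onedrive.live.com"}

def is_trusted_login_host(url: str) -> bool:
    """Return True only if the link resolves to an allow-listed host."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_LOGIN_HOSTS or any(
        host.endswith("." + trusted) for trusted in TRUSTED_LOGIN_HOSTS
    )

print(is_trusted_login_host("https://login.microsoftonline.com/common"))  # → True
print(is_trusted_login_host("blob:https://evil.example/fake-login"))      # → False
```

Note that the suffix test rejects look-alike domains such as `login.microsoftonline.com.evil.example`, a common phishing trick; a Blob URL fails the check outright because it has no network hostname at all.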

The broader implications of Blob URL phishing are significant. As remote work and digital transactions increase, the risk of credential theft grows, potentially leading to financial fraud or data breaches. The digital divide further complicates the issue, as not all users have the tools or knowledge to recognize such threats, a concern echoed in AI accessibility efforts. Additionally, the misuse of a legitimate technology like Blob URLs (commonly used by services such as YouTube to reference temporary video data) underscores the need for better regulation, a topic that surfaces frequently in debates about ethical tech deployment.

This new phishing tactic serves as a wake-up call for both users and security providers. As cybercriminals continue to innovate, staying ahead requires constant vigilance, improved technology, and widespread education on digital safety. The rise of Blob URL attacks highlights the evolving nature of cyber threats and the importance of proactive defense strategies. What do you think about this sneaky phishing method—how can we better protect ourselves online? Share your thoughts in the comments—we’d love to hear your perspective on this growing cybersecurity challenge.
