Alibaba Warns of AI Bubble in US Tech Sector Amid Hiring Push

March 26, 2025, 03:45 AM PDT – Alibaba Group Chairman Joe Tsai has raised the alarm about an emerging AI bubble in the U.S. tech sector, a warning that comes as the Chinese tech giant resumes hiring after a two-year freeze. Tsai’s remarks, made on March 25, 2025, add to growing concern about bubble risk in U.S. tech, a topic resonating with American investors, tech workers, and policymakers navigating the rapid rise of artificial intelligence.
Tsai pointed to overinflated valuations and speculative investments in U.S. AI startups as key indicators of a bubble, drawing parallels to the dot-com boom of the late 1990s. “The hype around AI has driven valuations to unsustainable levels,” Tsai said during a conference call, as reported by Reuters. He noted that many U.S. AI companies are burning through cash without clear paths to profitability, a trend that could lead to a market correction. This perspective is particularly striking given Alibaba’s own AI ambitions, as the company plans to hire more staff to support its AI initiatives, focusing on job creation for China’s 13 million annual college graduates.
The U.S. tech sector has seen a surge in AI investment, with venture capital funding for AI startups reaching $55 billion in 2024, according to CB Insights. Companies like xAI and Anthropic have raised billions, often at valuations exceeding $20 billion, despite limited revenue streams. Tsai’s warning aligns with concerns from U.S. economists, who note that the AI sector’s growth mirrors past bubbles, such as the 2017 cryptocurrency craze. “Investors are chasing the next big thing, but many of these startups lack sustainable business models,” said Dr. Laura Bennett, an economist at Stanford University, in an interview with this outlet. She pointed to the high failure rate of AI startups—over 60% within three years—as a red flag for the U.S. market.
Alibaba’s hiring push, meanwhile, reflects confidence in China’s economic policies, which have encouraged tech growth through subsidies and talent development programs. The company aims to bolster its AI capabilities, particularly in e-commerce and cloud computing, where AI can optimize logistics and personalize shopping experiences. This contrast with the U.S. market underscores a key difference: while China focuses on integrating AI into established industries, the U.S. is betting heavily on speculative innovation, a strategy that could backfire if the bubble bursts.
For U.S. tech workers, Tsai’s warning raises fears of job instability. The AI sector employs over 200,000 people in the U.S., according to the Bureau of Labor Statistics, and a market correction could lead to widespread layoffs. Investors are also on edge, as a downturn could wipe out billions in market value, affecting retirement funds and institutional portfolios. Small businesses that rely on AI tools for marketing or operations might face higher costs if startups fail, disrupting their growth plans.
The U.S. government is taking note. Policymakers are exploring regulations to curb speculative investments in AI, potentially requiring startups to disclose more financial data to investors. This could stabilize the market but might also stifle innovation, a concern for Silicon Valley leaders. “We need to support AI growth without fueling a bubble,” said Senator Maria Cantwell in a recent statement, reflecting the delicate balance facing U.S. regulators.
Alibaba’s perspective also highlights the global nature of the AI race. While the U.S. leads in AI innovation, China’s focus on practical applications could give it an edge if the U.S. market falters. American tech firms are now under pressure to prove their AI investments are sustainable, possibly by shifting focus to real-world applications like healthcare or education, where AI can deliver measurable benefits.
What’s next? The U.S. tech sector faces a critical juncture as investors and policymakers grapple with the risks of an AI bubble. Alibaba’s warning serves as a wake-up call, urging the industry to prioritize sustainability over hype, with significant implications for the future of AI in America.
OpenAI Launches Image Generation API, Bringing DALL-E Powers to Developers

OpenAI has released its advanced image generation technology as an API, allowing developers to integrate the powerful AI image creation capabilities directly into their applications. This move significantly expands access to the technology previously available primarily through ChatGPT and other OpenAI-controlled interfaces.
The newly released API gives developers programmatic access to the same image generation model that powers ChatGPT’s visual creation tools. Companies can now incorporate sophisticated AI image generation into their own applications without requiring users to interact with OpenAI’s platforms directly.
“We’re making our image generation models available via API, allowing developers to easily integrate image generation into their applications,” OpenAI stated in its announcement. The company emphasized that the API has been designed with both performance and responsibility in mind, implementing safety systems similar to those used in their consumer-facing products.
The image generation API supports a wide range of capabilities, including creating images from text descriptions, editing existing images with text instructions, and generating variations of uploaded images. Developers can specify parameters such as image size, style preferences, and quality levels to customize outputs for their specific use cases.
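To illustrate what a basic integration might look like, here is a minimal sketch using OpenAI’s official Python SDK. The model identifier, parameter values, and base64 response handling are assumptions for illustration; developers should confirm the supported options against OpenAI’s documentation.

```python
# pip install openai  (expects OPENAI_API_KEY in the environment)
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

# Generate an image from a text description; size and count are
# among the parameters the article describes as configurable.
result = client.images.generate(
    model="gpt-image-1",  # assumed model identifier
    prompt="A watercolor illustration of a lighthouse at dusk",
    size="1024x1024",
    n=1,
)

# Some image models return base64-encoded data rather than a URL;
# check the response format for the model you use before decoding.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("lighthouse.png", "wb") as f:
    f.write(image_bytes)
```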
Major software companies have already begun implementing the technology. Design and creative software leaders like Adobe and Figma are among the first partners to integrate the API into their products, enabling users to generate images directly within their existing workflows rather than switching between multiple applications.
The API operates on a usage-based pricing model, with costs calculated based on factors including image resolution, generation complexity, and volume. Enterprise customers with specialized needs can access custom pricing plans and dedicated support channels, while smaller developers can get started with standard plans.
Security and content moderation remain central to the implementation. OpenAI has incorporated safety mechanisms to prevent the generation of harmful, illegal, or deceptive content. The system includes filters for violent, sexual, and hateful imagery, as well as protections against creating deepfakes of real individuals without proper authorization.
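In client code, these safety mechanisms typically surface as rejected requests rather than silent filtering. A hedged sketch, assuming refusals arrive as 400-class errors via the Python SDK’s BadRequestError exception:

```python
from openai import OpenAI, BadRequestError

client = OpenAI()

def generate_or_none(prompt: str):
    """Try a generation; return None if the safety systems decline it."""
    try:
        return client.images.generate(model="gpt-image-1", prompt=prompt)
    except BadRequestError as exc:
        # Prompts that trip content filters are rejected server-side;
        # applications should surface a friendly message, not retry blindly.
        print(f"Request declined: {exc}")
        return None
```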
“This represents a significant step in making advanced AI capabilities more accessible to developers of all sizes,” said technology analyst Maria Rodriguez. “Previously, building this level of image generation required massive resources and expertise that most companies simply didn’t have.”
Industry experts note that the API’s release will likely accelerate the integration of AI-generated imagery across a wide range of applications, from e-commerce product visualization to educational tools and creative software. The programmable nature of the API allows for more customized and contextual image generation compared to using standalone tools.
For enterprises looking to incorporate image generation into their products, the API offers advantages including reduced latency, customization options, and the ability to maintain users within their own ecosystems rather than redirecting them to external AI tools.
The release comes amid growing competition in the AI image generation space, with competitors like Midjourney, Stable Diffusion, and Google’s image generation models all vying for developer and enterprise adoption. OpenAI’s strong brand recognition and the widespread familiarity with DALL-E through ChatGPT give it certain advantages, though pricing and performance factors will influence adoption rates.
Developers interested in implementing the image generation API can access documentation and begin integration through OpenAI’s developer portal. The company provides code examples in popular programming languages and comprehensive guides for common use cases to streamline the implementation process.
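As a starting point, an instruction-based edit of an existing image might follow the pattern sketched below; the model name and file paths are illustrative, and response handling mirrors the generation example above.

```python
from openai import OpenAI

client = OpenAI()

# Edit an existing image using a natural-language instruction.
with open("lighthouse.png", "rb") as src:
    result = client.images.edit(
        model="gpt-image-1",  # assumed model identifier
        image=src,
        prompt="Add a small sailboat on the horizon",
    )
```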
OpenAI emphasizes that all API users must adhere to their usage policies, which prohibit applications that could cause harm or violate the rights of others. The company maintains the ability to monitor API usage and can suspend access for applications that violate these terms.
As AI-generated imagery becomes increasingly mainstream, ethical considerations around disclosure and transparency continue to evolve. Many platforms require or encourage disclosure when AI-generated images are used commercially, and OpenAI recommends that developers implement similar transparency measures in their applications.
The API release represents OpenAI’s continued strategy of first developing advanced AI capabilities for direct consumer use before making them available as programmable services for the broader developer ecosystem. This approach allows the company to refine its models and safety systems before wider deployment while maintaining some level of oversight regarding how its technology is implemented.
Columbia Student Suspended for AI Cheating Tool Secures $5.3M in Funding

Former Columbia University student Chungin “Roy” Lee has transformed academic punishment into entrepreneurial opportunity by securing $5.3 million in seed funding for his controversial AI startup. The 21-year-old, who was suspended from the prestigious university for creating an AI interview cheating tool, has now founded Cluely, a company focused on developing AI tools for interview assistance.
“I got kicked out of Columbia for building an AI tool that helped me cheat on coding interviews,” Lee said in recent interviews. Rather than abandoning his project after facing serious academic consequences, the young entrepreneur refined his technology and attracted significant investor interest.
According to TechCrunch, the $5.3 million seed round was co-led by Abstract Ventures and Susa Ventures, with participation from several angel investors who recognized potential in Cluely’s approach to AI-assisted communication. This funding success comes during a challenging period for AI startups, with venture capital investments in the sector showing notable decline in recent months.
“Cluely is out. cheat on everything.” — Roy (@im_roy_lee) on X, April 20, 2025
The Technology Behind Cluely
Cluely’s technology analyzes patterns in interview questions and generates contextually appropriate responses based on an extensive database of successful answers. The system can provide real-time suggestions during interviews, helping users respond more effectively to unexpected questions.
The application initially focused on academic settings but has expanded to cover job interviews and other professional assessments. Users can access Cluely’s suggestions through mobile applications and browser extensions designed to operate discreetly during interview situations.
“Our technology isn’t just about providing answers,” Lee explains. “It’s about augmenting human capabilities in situations where people often struggle to perform their best due to anxiety or limited preparation time.”
Reports from Digital Watch corroborate this account of how the tool works, noting that users can access its suggestions through various interfaces, enabling what some consider an unfair advantage in assessment situations.
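Cluely has not published technical details, so any implementation specifics are speculation. As a purely hypothetical sketch of the retrieval step these reports describe, matching an incoming question against a bank of previously successful answers, the toy Python below uses a stand-in embedding and cosine similarity; a real system would use a trained text-embedding model and a generative model on top.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy letter-frequency embedding, a stand-in for a real embedding model.
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Hypothetical "database of successful answers" keyed by question.
answer_bank = {
    "Tell me about a time you led a team.": "On my last project I coordinated three engineers...",
    "What is your greatest weakness?": "I tend to over-prepare, which I now timebox...",
}
bank_vectors = {q: embed(q) for q in answer_bank}

def suggest(question: str) -> str:
    """Return the stored answer whose question is most similar (cosine)."""
    q_vec = embed(question)
    best = max(bank_vectors, key=lambda q: float(q_vec @ bank_vectors[q]))
    return answer_bank[best]

print(suggest("Describe a time you led a group."))
```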
Ethical Concerns and Academic Integrity
The emergence and funding of Cluely has sparked intense debate within educational and professional communities. Academic institutions, including Columbia University, have expressed concerns about tools that potentially undermine the integrity of assessment processes.
“When we evaluate students or job candidates, we’re trying to gauge their actual knowledge and abilities,” explained Dr. Michael Chen, Dean of Student Affairs at a prominent East Coast university. “Tools that artificially enhance performance risk making these assessments meaningless.”
Maeil Business Newspaper reports that many universities are already adapting their interview processes to counter AI-assisted cheating. Some have implemented stricter monitoring protocols, while others are moving toward assessment methods that are more difficult to circumvent with AI assistance.
Educational technology experts suggest that Cluely represents a new frontier in the ongoing balance between assessment integrity and technological advancement. “We’ve dealt with calculators, internet access, and basic AI tools,” noted education technology researcher Dr. Lisa Rodriguez. “But real-time interview assistance takes these challenges to a completely different level.”
Growing Market Despite Controversy
Despite ethical concerns, market analysts predict significant growth in AI-assisted communication tools. The global market for such technologies is projected to reach $15 billion by 2027, according to recent industry reports.
Cluely is positioning itself at the forefront of this emerging sector. The company plans to use its newly secured funding to expand its team, enhance its core technology, and develop new features targeting various interview and assessment scenarios.
“We’re currently focused on interview preparation and assistance,” Lee stated, “but our vision extends to supporting all forms of high-stakes communication, from negotiation to public speaking and beyond.”
FirstPost highlights that while the company markets its product as an “AI communication assistant,” many educators view it as explicitly designed for cheating. This perception stems from Lee’s own admission about the tool’s origins and its “cheat on everything” tagline that has appeared in some marketing materials.
Regulatory Landscape and Future Challenges
As AI communication tools like Cluely gain traction, they face an evolving regulatory landscape. Several states are considering legislation that would require disclosure when AI assistance is used in academic or professional settings.
“We anticipate increased regulatory attention as our technology becomes more widespread,” Lee acknowledged. “We’re committed to working with regulators to find the right balance between innovation and protecting the integrity of assessment systems.”
Legal experts suggest that the coming years will see significant development in how AI-assisted communication tools are regulated, particularly in educational and employment contexts. Some predict requirements for disclosure when such tools are used, while others anticipate technical countermeasures to detect AI assistance.
Adapting Assessment Methods for the AI Era
The rise of tools like Cluely is forcing educational institutions and employers to reconsider traditional assessment methods. Many are already shifting toward evaluation approaches that are more difficult to game with AI assistance.
“We’re seeing increased interest in project-based assessments, collaborative problem-solving exercises, and demonstrations of skills in controlled environments,” explained Dr. Jennifer Wise, an expert in educational assessment. “The goal is to evaluate capabilities in ways that AI can’t easily enhance.”
Some forward-thinking organizations have embraced AI as part of the assessment process, explicitly allowing candidates to use AI tools while focusing evaluation on how effectively they leverage these resources.
The Future of Human-AI Collaboration
Beyond the immediate context of interviews and assessments, Cluely represents a broader trend toward AI-augmented human performance. This trend raises fundamental questions about how we define and value human capabilities in an era of increasingly sophisticated AI assistance.
For Lee and Cluely, these philosophical questions take a back seat to the immediate business opportunity. With $5.3 million in fresh funding, the company is poised for rapid growth despite its controversial origins.
As TechCrunch notes, Cluely’s success highlights the complex relationship between academic integrity and technological innovation. While educational institutions grapple with how to maintain assessment validity, entrepreneurs like Lee are capitalizing on the demand for tools that enhance human performance—regardless of the ethical implications.
Instagram’s AI Teen Detection: The 2025 Surprise You’ll Want to See!

Instagram has rolled out a groundbreaking use of artificial intelligence (AI) to detect and protect teen users, a move announced in 2025 that’s catching global attention. This “adult classifier” system, developed by Meta, uses AI to identify users under 18 based on profile data, interactions, and even birthday posts, automatically enforcing stricter safety settings. Launched as part of a broader effort to safeguard younger audiences, it promises 98% accuracy in age estimation, but it’s sparking debates about privacy and parental control.
This isn’t just a tech update—it’s a worldwide story with implications for social media safety. With AI flagging teens, how will it balance protection and personal freedom? Let’s dive into the details.
The initiative addresses growing concerns about teen safety on social media, building on Meta’s prior investments in AI technology. The “adult classifier” analyzes signals like follower lists, content engagement, and friend messages—such as “happy 16th birthday”—to estimate age, as noted in a tech overview by Meta. This system, first tested in 2022, now enforces settings like private accounts and restricted messaging for detected teens, responding to pressure from parents and regulators worldwide.
The AI operates by processing vast datasets from user activity. It cross-references profile details with behavioral patterns, achieving a 98% accuracy rate in distinguishing users under 25, according to a detailed report on age detection. When it identifies a teen, Instagram applies safeguards: accounts default to private, adults can’t message them unless connected, and content filters block harmful material. This real-time adjustment aims to protect millions of users, but the scale of data involved raises questions.
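Meta has not disclosed the classifier’s features, weights, or thresholds, so the sketch below is an illustrative assumption only. It scores a few of the signals the article mentions (stated age, birthday posts, peer network) and applies the teen defaults described above when the score crosses a made-up threshold:

```python
from dataclasses import dataclass, field

@dataclass
class ProfileSignals:
    stated_age: int
    birthday_mentions: list[str] = field(default_factory=list)  # e.g. "happy 16th birthday"
    follower_median_age: float = 30.0

def looks_like_teen(sig: ProfileSignals) -> bool:
    # Illustrative scoring only; the real model's features and weights are not public.
    score = 0.0
    if sig.stated_age < 18:
        score += 1.0
    if any(f"{n}th birthday" in m.lower()
           for m in sig.birthday_mentions for n in (13, 14, 15, 16, 17)):
        score += 1.5  # direct textual evidence of age
    if sig.follower_median_age < 20:
        score += 0.5  # peer network skews young
    return score >= 1.0  # made-up threshold

def apply_teen_defaults(account: dict) -> None:
    # Safeguards mirroring those described in the article.
    account["private"] = True
    account["dms_from_unconnected_adults"] = False
    account["sensitive_content_filter"] = "strict"

account: dict = {}
sig = ProfileSignals(stated_age=16, birthday_mentions=["Happy 16th birthday!!"])
if looks_like_teen(sig):
    apply_teen_defaults(account)
```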
Experts highlight the benefits. The 98% accuracy could shield teens from predators and mental health risks, a concern backed by an AP investigation into teen safety. For parents, it offers peace of mind, with options to monitor settings. Businesses might see safer platforms boost user trust, potentially increasing ad revenue. Yet, the system isn’t perfect—errors could misclassify adults as teens, limiting their access.
Risks are significant. AI relies on data, and biases in training sets—perhaps from uneven global representation—could lead to mistakes. Privacy advocates worry about the collection of personal data, including video selfies for verification, a method detailed in a Fast Company article on AI testing. Users can appeal, but the lack of transparency on data use is a gap. Cultural differences, like varying age norms, might also confuse the algorithm.
Public reaction adds a layer. On X, teens express frustration over lost control, with one posting, “AI deciding my account settings feels invasive.” Parents, however, praise the safety boost, calling it “long overdue.” This split, missing from official statements, could shape future tweaks, perhaps leading to opt-in options or clearer policies.
Long-term impact is another gap. Will AI detection hold up over years as teen behavior evolves? How will it handle new threats, like deepfakes or cyberbullying spikes? Meta plans a year-long review, but these questions linger. The system’s success could influence other platforms, like TikTok, to adopt similar tools.

Photo: Meta
As Instagram tests this AI teen detection, the world watches. Will it redefine social media safety or raise new privacy challenges? Early feedback will guide its future.