business resources
AI Security Breaches on the Rise: How 73% of Enterprises Are Falling Victim
28 Nov 2025, 11:22 am GMT
Recent data reveals that a staggering 73% of enterprises experienced AI-related security incidents in the past year, with breaches averaging a costly $4.8 million per event. Aras Nazarovas, Senior Information Security Researcher at Cybernews, warns that such incidents often stem from simple, preventable security oversights, as organisations neglect to apply essential safeguards to their AI systems.
As AI technology continues to evolve and integrate into businesses globally, the security implications of these systems become more apparent. In 2023 alone, 73% of enterprises encountered at least one AI-related security incident.
The financial impact of these breaches is not to be underestimated, with the average cost of a single AI security incident reaching $4.8 million. Yet, despite these alarming statistics, the underlying causes of many breaches can be traced to basic security failures rather than sophisticated cyber-attacks.
Aras Nazarovas, Senior Information Security Researcher at Cybernews, highlighted these issues by pointing to several high-profile examples, such as the Vyro AI incident, where a simple oversight, an unsecured Elasticsearch server, led to the exposure of sensitive data. This incident, and others like it, show that the risks posed by AI are not only complex but also exacerbated by human error and technical incompetence.
The growing AI security crisis
In recent years, the surge in AI adoption across industries has brought numerous benefits, including improved operational efficiency and enhanced decision-making. However, with these advancements come significant risks, particularly in the realm of cybersecurity. As AI systems become integral to business processes, traditional security frameworks are often inadequate to address the new types of threats these systems pose.
Simple security mistakes, such as leaving a database open to the public, have led to major data leaks. For example, Vyro AI left an Elasticsearch server exposed for months, containing sensitive data such as prompts, tokens, and user information. This incident, while not the result of a sophisticated attack, highlights the vulnerabilities present in AI systems.
“Cybercriminals are becoming more sophisticated, but the leak connected to Vyro AI is not that. It proves that a simple mistake, like leaving a database open to everyone, can expose user data to attackers for months. And it could have been avoided if it had been given more attention,” Nazarovas explains. “
Why traditional security frameworks aren’t enough
AI systems are different from traditional software in several key ways. For one, their unpredictable data flows, processing behaviours, and constant learning cycles mean they don’t always adhere to the security norms that have served traditional systems well. Traditional security practices fail to account for the complexity of these AI systems, which can present an expanded attack surface.
One of the most concerning threats in AI security is prompt injection, where attackers manipulate the input to an AI system to get it to behave in an unintended way. This can lead to the exposure of sensitive data, even if the attacker has no sophisticated technical skills. All they need is a knack for crafting the right type of prompt.
“Attackers can manipulate AI responses by crafting prompts, leading to unauthorised access to user data. It requires no specialised technical skills, only the ability to craft persuasive language that influences the system’s behaviour,” Nazarovas says.
The human element: A risk factor for AI security
Human error remains a leading factor behind AI security breaches. Employees, often unwittingly, fail to implement basic security protocols, exposing sensitive data. As AI systems become more complex, the need for comprehensive security training and policies becomes critical. While educating employees about the risks of data input and secure AI usage is essential, these measures alone are not sufficient.
“It’s your job to minimise the risk, starting from the basics, and no, that does not mean you should stop using AI. You should use it more wisely. Before I type anything into a chatbot, I often ask myself, ‘Would I be okay if this info were leaked tomorrow?” Nazarovas advises.
Key strategies for enhancing AI security
Organisations must start by integrating AI security into their broader cybersecurity strategies. One essential first step is implementing role-based training for employees, with pre-approved prompt templates or scenario-based exercises. This can limit exposure to high-risk tools and encourage the use of safer, authorised alternatives.
However, security measures cannot rely solely on human training. Tools must be in place to enforce these practices. For instance, AI traffic should be routed through Cloud Access Security Brokers (CASB) and Secure Service Edge (SSE), and Data Loss Prevention (DLP) should be enabled on both input prompts and output responses. Additionally, sensitive data should be masked or redacted, and logs should be encrypted by default.
“The bottom line is that you should not blindly trust your employees. Set clear rules and use necessary tools. Data deserves protection, and until companies face consequences, everyone will continue to be surprised when another ‘sophisticated’ attack is left to be simple negligence,” Nazarovas emphasises.
Preparing for the future: AI is not going away
The future of business is undoubtedly intertwined with AI, and organisations must adapt their security measures accordingly. With AI's ability to process vast amounts of data and automate decision-making, its potential is enormous. However, the risks associated with its use are equally vast. As more businesses integrate AI into their operations, the importance of a robust AI security framework cannot be overstated.
“We must develop a comprehensive and globally shared view of how technology is affecting our lives and reshaping our economic, social, cultural, and human environments. There has never been a time of greater promise, or greater peril,” warns Klaus Schwab, Founder and Executive Chairman of the World Economic Forum.
As the lines between traditional security and AI security continue to blur, companies must not just prepare for future AI-powered threats—they must take immediate action to secure their AI systems from the simple errors that have already caused costly breaches.
About Cybernews
Cybernews is an independent online publication that provides in-depth analysis, research, and reports on cybersecurity threats and vulnerabilities. The team conducts over 7,000 investigations annually, helping businesses and consumers understand the risks associated with online security and data privacy.
With a strong focus on transparency and accuracy, Cybernews offers valuable insights to protect digital assets and navigate the ever-evolving cybersecurity landscape. Their research is widely regarded for uncovering significant security flaws and privacy concerns, contributing to safer online environments worldwide.
Share this
Shikha Negi
Content Contributor
Shikha Negi is a Content Writer at ztudium with expertise in writing and proofreading content. Having created more than 500 articles encompassing a diverse range of educational topics, from breaking news to in-depth analysis and long-form content, Shikha has a deep understanding of emerging trends in business, technology (including AI, blockchain, and the metaverse), and societal shifts, As the author at Sarvgyan News, Shikha has demonstrated expertise in crafting engaging and informative content tailored for various audiences, including students, educators, and professionals.
previous
TerraPay Launches Xend: Revolutionising Global Payments with Seamless Wallet Interoperability
next
How Can You Use Elementor’s Free Website Builder Templates?