business resources

73% of Enterprises Faced AI Breaches: What Organisations Must Fix Now

Himani Verma Content Contributor

25 Nov 2025, 1:48 pm GMT

73% of Enterprises Faced AI Breaches: What Organisations Must Fix Now
73% of Enterprises Faced AI Breaches: What Organisations Must Fix Now

73% of enterprises experienced AI-related security incidents last year, often caused by basic misconfigurations rather than advanced attacks. Cases like ChatGPT, Grok, and Vyro AI exposing sensitive data show how traditional security frameworks fail in AI environments. Organisations must enforce controls, secure infrastructure, strengthen transparency, and treat AI systems as Tier-1 assets to reduce preventable breaches and maintain trust.

Artificial intelligence is now deeply embedded in enterprise workflows, yet the rise in security incidents shows how unprepared many organisations remain. 

Researcher Aras Nazarovas examines how 73% of enterprises encountered at least one AI-related security issue in the past year, with recent exposures involving ChatGPT, Grok, and Vyro AI illustrating how simple oversights compromise sensitive data.

Aras, whose work focuses on uncovering large-scale privacy and security flaws across global platforms, highlights how traditional security frameworks fail to address AI’s unpredictable data flows and expanding attack surface. As organisations accelerate AI adoption, he argues that stronger foundations, clearer controls, and transparent practices are essential to prevent avoidable breaches and maintain trust in AI-driven systems.

Visible failures reveal systemic weaknesses

High-profile cases, such as ChatGPT and Grok conversations appearing in Google search results and exposing sensitive corporate information, demonstrate how fundamental configuration failures can compromise entire workflows.

A similar pattern emerges in the Vyro AI incident, where an Elasticsearch server was left fully open. The exposed dataset included prompts, tokens, and user agents. The absence of password protection, authentication requirements, or network restrictions meant the database functioned as “a data centre’s doors wide open for everyone to see.”

These issues highlight a growing concern for CTOs, CISOs, and other executives: most AI-related breaches are not the result of advanced intrusions, but simple security mistakes.

AI changes the attack surface

Traditional security frameworks do not fully translate to modern AI environments. AI systems operate under different principles, process data unpredictably, and often lack conventional boundaries.

One example is prompt injection, where attackers manipulate responses by crafting persuasive prompts. This does not require technical expertise—only an understanding of language patterns and system behaviour.

The impact is significant:

73% of enterprises encountered at least one AI-related security incident in the past year, with an average breach cost of $4.8 million. While organisations build defences against complex AI-powered attacks, basic misconfigurations often remain unresolved.

Human error vs. technical inattention

Human error remains common across digital systems, yet the Vyro AI exposure illustrates how neglecting basic practices puts millions of users at risk. The server remained unsecured for several months, reinforcing why many individuals hesitate before entering sensitive information into AI tools. Once data enters an external system, users lose control over where it may resurface.

Transparency is largely absent

Most AI service providers do not clearly disclose:

  • how user data is stored
  • who has access
  • how long information is retained
  • whether data trains the underlying model

This lack of openness becomes critical during incidents. Public responses often rely on vague explanations or ambiguous statements.

During the Tea App incident, Reddit users immediately questioned official claims. One user asked: “Was it just a poorly configured cloud bucket that allows public users to view and download data, meaning it was negligence and not force?”

Others highlighted inaccurate descriptions such as: “The information was stored in accordance with law enforcement requirements related to cyber-bullying,”
which community members identified as untrue.

The recurrence of such incidents shows a pattern: simple errors are frequently portrayed as “sophisticated attacks.”

Compliance begins with basic controls

Employee awareness programmes help, but they do not eliminate risk. Many users still enter confidential information into AI tools without considering exposure. Organisations need systematic and enforced controls.

Recommended steps include:

  • Role-based training with scenario prompts and pre-approved templates
  • Blocking high-risk AI tools while offering secure, authorised alternatives
  • Implementing technical guardrails, such as routing AI traffic through CASB/SSE systems
  • Activating DLP on prompts and outputs
  • Masking or redacting PII and secrets
  • Encrypting logs and reducing retention by default
  • Security leaders should aim to make the easiest path also the safest path.

Build infrastructure able to support AI

Enterprises planning to adopt AI should treat it as a Tier-1 data system. This includes:

  • selecting reputable vendors
  • validating retention settings and private modes
  • confirming that user data does not train the model
  • reviewing SOC 2, ISO and other security documentation

The objective is to create systems resilient to failures rather than hoping configurations remain intact.

Strengthening internal trust

Organisations must not rely blindly on staff adherence. Clear rules, supported by automated tooling, help reduce exposure. Data requires consistent protection, and until misconfigurations carry consequences, avoidable breaches will continue.

As Aras Nazarovas writes: “Before I type anything into a chatbot, I often ask myself, ‘Would I be okay if this info were leaked tomorrow?’”

About Cybernews

Cybernews is an independent, globally recognised media outlet established in 2019. Its team of journalists and security researchers investigates cybersecurity risks, uncovers vulnerabilities, and provides analysis on developments across the digital se curity landscape. Using white-hat investigative techniques, the Cybernews research team identifies and responsibly discloses security exposures affecting organisations, platforms, and millions of users worldwide.

Over the years, Cybernews has earned international attention for a series of high-impact discoveries. Researchers uncovered 16 billion leaked login credentials linked to infostealer malware, social media services, developer portals, and corporate networks. A large-scale analysis of 156,080 iOS apps revealed that 71% exposed sensitive data, highlighting systemic issues across the App Store ecosystem. The team also identified an unprotected Elasticsearch index containing personal details of the entire population of Georgia.

Further investigations include research into the Pixel 9 Pro XL, which showed that the device transmits user data to Google even before any apps are installed. Cybernews also reported on the MC2 Data leak, which affected one-third of the US population, and found that the 50 most popular Android apps request an average of 11 dangerous permissions. Additional work exposed two online PDF makers that leaked tens of thousands of user documents, and uncovered more than one million publicly exposed secrets from 58,000 websites’ .env configuration files.

In other significant findings, Cybernews revealed that Football Australia leaked secret keys granting access to 127 data buckets. In collaboration with cybersecurity researcher Bob Dyachenko, the team also discovered a 12-terabyte data leak comprising more than 26 billion records, one of the largest known exposures of its kind.

Share this

Himani Verma

Content Contributor

Himani Verma is a seasoned content writer and SEO expert, with experience in digital media. She has held various senior writing positions at enterprises like CloudTDMS (Synthetic Data Factory), Barrownz Group, and ATZA. Himani has also been Editorial Writer at Hindustan Time, a leading Indian English language news platform. She excels in content creation, proofreading, and editing, ensuring that every piece is polished and impactful. Her expertise in crafting SEO-friendly content for multiple verticals of businesses, including technology, healthcare, finance, sports, innovation, and more.