Enterprises Face Rising ‘AI Trust Gap’ as Public Concern Shifts Toward Governance and Control
28 Nov 2025, 0:35 pm GMT
A new Cybernews–nexos.ai study shows Americans fear losing control of AI more than job loss, with “Control and regulation” and “Data and privacy” ranking highest. This mirrors enterprise concerns, where shadow AI, data exposure, inaccuracy, IP risks, and compliance failures create measurable threats. Experts emphasise visibility, governance, and human oversight as essential for safe AI adoption.
A new joint study by Cybernews and nexos.ai reveals that the public’s biggest concern about artificial intelligence (AI) is not job loss but the issue of control. These findings mirror the growing challenges within enterprises, where unapproved AI tools, limited visibility, and weak governance frameworks increase anxiety around risk, compliance, and data protection.
The study evaluates public sentiment around AI from January to October 2025, analysing search trends across five categories of concern. The results provide insight into how fears about AI are evolving and why enterprises must urgently address the trust gap developing inside their own organisations.
Control takes centre stage in public AI anxiety
The analysis shows that “Control and regulation” is the single largest source of public concern, recording an average interest score of 27. Closely behind is “Data and privacy” at 26. In comparison, “Job displacement and workforce impact” ranks last, despite significant global layoffs in the technology sector this year.
This shift in public sentiment confirms a wider trend: people worry less about AI taking their jobs and more about how AI technologies are governed, monitored, and controlled.
Žilvinas Girėnas, head of product at nexos.ai, explains that this mirrors internal enterprise challenges:
“Leaders are not necessarily afraid of AI itself, but rather of losing visibility into its operations. When teams adopt unapproved AI tools, companies lose track of what data is being used and where it’s going. Without visibility, you can’t manage risk or compliance,” he says.
Understanding the roots of AI anxiety
Researchers describe today’s growing unease as “AI anxiety”—a collective psychological response to AI’s rapid integration into society. It arises from the speed of technological development, the opaque nature of AI systems, and the broad social implications that follow.
A major driver of this anxiety is the lack of transparency in advanced AI models. Many systems function as “black boxes”, generating outputs without clear explanations. This opacity leads to concerns about whether humans can effectively govern systems they do not fully understand.
Data privacy is another key factor. AI models often rely on personal data collected from browsing behaviour, social media activity, and smart devices—frequently without explicit user consent. As data breaches grow more common, fears around identity theft, financial loss, and personal data misuse intensify.
In addition, the rise of highly realistic AI-generated content contributes to mistrust and what researchers call “reality apathy”, where it becomes increasingly difficult to distinguish between authentic and fabricated information. Bias in AI training data also amplifies concerns about fairness, especially in sensitive areas such as hiring or financial decision-making.
Although job displacement receives the least attention in search trends, the study notes that concerns about automation carry deeper implications. Researchers link this to “existential anxiety”, where individuals fear a loss of purpose or identity as AI assumes more cognitive tasks.
Girėnas notes that both the public and organisations face a similar challenge: “These public fears are a rational response to the ‘black box’ nature of AI today. Organizations face the same challenge: when teams don’t really understand how AI works, confidence in the technology drops, and it can slow down AI adoption. The only way to innovate safely is to build a framework of trust, and that foundation is built on total visibility into your AI ecosystem.”
Why losing control of AI is a business risk
For enterprises, the trust gap is not theoretical. It creates direct operational, financial, and legal risks.
Recent findings from McKinsey highlight several negative consequences companies experience as AI adoption increases:
Inaccuracy and reputational impact
Inaccuracy is the most common issue reported by businesses. When employees rely on unvetted “shadow AI” tools, the likelihood of inaccurate, biased, or hallucinated outputs increases. Such errors can enter customer communications, product development, or data analysis, creating reputational damage.
Cybersecurity threats
More than half of organisations (51%) actively work to manage cybersecurity risks linked to AI. Companies fear that AI tools may expose sensitive corporate information, leak confidential documents, or become a vector for malware.
Intellectual property exposure
Concerns around intellectual property (IP) are particularly high among AI-intensive organisations. Proprietary code, strategic documents, and confidential information entered into public AI tools can be absorbed into external models and potentially exposed. “AI high performers” are the group most likely to report such incidents.
Regulatory non-compliance
Without consolidated governance, enterprises struggle to ensure compliance with data protection and AI-related regulations such as GDPR or the EU AI Act. The study notes that 43% of organisations are actively working to reduce regulatory risk due to the increasing likelihood of penalties and legal consequences.
How enterprises can build trust and reduce AI anxiety
To close the trust gap and strengthen responsible adoption, nexos.ai outlines four practical governance strategies for leaders:
1. Centralise governance: Organisations should establish a unified set of rules that govern AI usage across all teams, tools, and data environments.
2. Implement human-in-the-loop processes: Critical outputs from AI should undergo human review before they influence business decisions, customer communication, or operational processes.
3. Make AI governance a C-suite responsibility: Executive leadership must prioritise AI safety and governance, ensuring alignment between innovation and risk management.
4. Prioritise visibility over restriction: Instead of banning AI tools outright, leaders should focus on understanding which tools teams use and how data flows across them. Visibility enables better compliance, risk control, and strategic planning.
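The "human-in-the-loop" and "visibility over restriction" strategies above can be illustrated with a short sketch. The code below is a hypothetical, minimal example, not nexos.ai's actual product or API: every AI call passes through one gateway function that writes an audit entry (which tool, what data classification) and routes high-stakes outputs to a human reviewer before they are used. All names (`GatewayLog`, `call_ai_tool`, the `HIGH_STAKES` categories) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GatewayLog:
    """Hypothetical central audit trail giving leaders visibility into AI usage."""
    entries: list = field(default_factory=list)

    def record(self, tool: str, data_class: str, reviewed: bool) -> None:
        # Record which tool was used, on what class of data,
        # and whether a human was in the loop.
        self.entries.append(
            {"tool": tool, "data_class": data_class, "reviewed": reviewed}
        )

# Illustrative data classifications that would trigger mandatory human review.
HIGH_STAKES = {"customer_comms", "financial", "legal"}

def call_ai_tool(tool: str, prompt: str, data_class: str,
                 run: Callable[[str], str], review: Callable[[str], str],
                 log: GatewayLog) -> str:
    """Route every AI call through one gateway: log it, and require
    human review (human-in-the-loop) when the use case is high-stakes."""
    output = run(prompt)
    needs_review = data_class in HIGH_STAKES
    if needs_review:
        # A human approves or edits the output before it is released.
        output = review(output)
    log.record(tool, data_class, reviewed=needs_review)
    return output

log = GatewayLog()
result = call_ai_tool(
    tool="example-model", prompt="Draft a refund email",
    data_class="customer_comms",
    run=lambda p: "draft reply",
    review=lambda o: o + " (approved)",
    log=log,
)
```

The design point matches the article's recommendation: teams are not blocked from using AI tools, but every call is visible in one place, so compliance and risk questions ("what data went where, and was it reviewed?") can be answered from the log rather than guessed.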
About nexos.ai
nexos.ai is an all-in-one AI platform supporting secure and controlled enterprise AI adoption. Through its secure AI Workspace for employees and AI Gateway for developers, nexos.ai enables organisations to replace fragmented AI tools with a unified interface that includes built-in guardrails, comprehensive visibility, and flexible access controls. The platform integrates with leading AI models, allowing organisations to innovate while maintaining compliance and security.
Headquartered in Vilnius, Lithuania, nexos.ai is backed by Evantic Capital, Index Ventures, Creandum, Dig Ventures, and prominent angel investors including Olivier Pomel (CEO of Datadog), Sebastian Siemiatkowski (CEO of Klarna) through Flat Capital, Ilkka Paananen (CEO of Supercell), and Avishai Abrahami (CEO of Wix.com).
Himani Verma
Content Contributor
Himani Verma is a seasoned content writer and SEO expert with experience in digital media. She has held various senior writing positions at enterprises such as CloudTDMS (Synthetic Data Factory), Barrownz Group, and ATZA. Himani has also been an editorial writer at Hindustan Times, a leading Indian English-language news platform. She excels in content creation, proofreading, and editing, ensuring that every piece is polished and impactful, and specialises in crafting SEO-friendly content for multiple business verticals, including technology, healthcare, finance, sports, and innovation.