10 September 2025, Geneva
Organizations across critical sectors are building operational and digital infrastructures on opaque, ungoverned foundations. According to a recent industry survey by Kiteworks:
Nearly half of all enterprises do not know how many third parties they rely on
Only 17% have implemented any form of AI governance or controls
One in four organizations experiences seven or more breaches annually
These statistics highlight a dangerous new reality: vulnerabilities are now being inherited and amplified across entire digital ecosystems. We are facing a risk landscape defined by what we can't see.
Modern enterprises are interconnected webs of vendors, subcontractors, and cloud services. Most of these exposure points are poorly mapped and unmonitored, creating an invisible attack surface.
The adoption of AI is outpacing the ability to manage its risks. From automated decision engines to generative tools, models are being deployed faster than organizations can understand their implications or regulate their behavior.
The convergence of unmonitored third-party supply chains and opaque AI models is uniquely dangerous. This combination amplifies failures, making them unpredictable and difficult to detect.
This is a strategic shift in the architecture of risk. It’s no longer about a single vulnerability but about the structural fragility of our interconnected systems.
The combination of undocumented supply chains and opaque AI models results in a system primed for silent failure. Exploitation may occur without alert, attribution, or containment, and organizations cannot defend what they cannot see.
As dependencies multiply and regulation lags behind AI deployment, institutions are left structurally exposed. Each unseen vendor or untested model becomes a node of potential failure, with repercussions across entire ecosystems.
Breach fatigue is only part of the problem. As digital systems become more complex and less transparent, public trust in institutions continues to degrade. This is not just a technical issue; it is a governance and legitimacy crisis in the making.
For Policymakers
Mandate transparency in third-party digital relationships
Establish minimum AI governance standards
Treat AI-integrated supply chains as part of national digital sovereignty
For Organizations
Conduct visibility audits to map all active third-party services and AI deployments (an illustrative inventory sketch follows this list)
Shift away from compliance-driven checklists and implement resilience-driven design
Invest in cross-functional AI risk governance boards and ensure they are empowered with real authority
Ensure attack surface management covers all AI models and vendor ecosystems
Implement pre-mortem testing for AI failures and vendor compromise
Build trust modeling into third-party selection processes (an illustrative scoring sketch also follows this list)
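As a concrete illustration of the visibility-audit recommendation above, the Python sketch below merges third-party and AI-service records from separate internal systems of record into a single inventory and flags entries with no accountable owner or completed risk review. The source names, fields, and sample records are illustrative assumptions, not a prescribed tool.

    # Visibility-audit sketch: merge dependency records from separate
    # systems of record and flag governance gaps. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Dependency:
        name: str
        kind: str                       # "vendor", "cloud-service", or "ai-model"
        sources: set = field(default_factory=set)
        owner: str | None = None        # accountable internal owner, if any
        risk_reviewed: bool = False     # has a risk review been completed?

    def build_inventory(records):
        """Merge duplicate records of the same dependency into one entry."""
        inventory = {}
        for source, rec in records:
            dep = inventory.setdefault(rec["name"], Dependency(rec["name"], rec["kind"]))
            dep.sources.add(source)
            dep.owner = dep.owner or rec.get("owner")
            dep.risk_reviewed = dep.risk_reviewed or rec.get("risk_reviewed", False)
        return inventory

    # Hypothetical records pulled from procurement, cloud billing, and an ML registry.
    records = [
        ("procurement",   {"name": "AcmeSCADA",      "kind": "vendor",
                           "owner": "OT team", "risk_reviewed": True}),
        ("cloud-billing", {"name": "ObjectStoreX",   "kind": "cloud-service"}),
        ("ml-registry",   {"name": "fraud-model-v3", "kind": "ai-model",
                           "owner": "Risk Analytics"}),
    ]

    for dep in build_inventory(records).values():
        gaps = []
        if dep.owner is None:
            gaps.append("no owner")
        if not dep.risk_reviewed:
            gaps.append("no risk review")
        status = "OK" if not gaps else "GAP: " + ", ".join(gaps)
        print(f"{dep.name:<16} {dep.kind:<14} sources={len(dep.sources)}  {status}")

In practice the records would come from real sources such as procurement databases, cloud billing exports, and model registries; the point of the exercise is that ownership and review gaps only become visible once those sources are merged.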
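Likewise, trust modeling in third-party selection can begin as a weighted score over due-diligence signals, sketched below. The signals, weights, and acceptance threshold are illustrative assumptions rather than an ISRS-endorsed model.

    # Trust-modeling sketch: weighted score over due-diligence signals.
    # Signals, weights, and threshold are illustrative assumptions.
    WEIGHTS = {
        "independent_audit":    0.30,  # e.g. SOC 2 / ISO 27001 attestation
        "clean_breach_history": 0.25,  # no known breaches in the review window
        "subprocessor_list":    0.20,  # discloses its own third parties
        "ai_transparency":      0.15,  # documents AI models used in its service
        "contractual_notice":   0.10,  # breach-notification clause in contract
    }

    def trust_score(signals: dict[str, bool]) -> float:
        """Weighted fraction of satisfied signals, in [0, 1]."""
        return sum(w for k, w in WEIGHTS.items() if signals.get(k, False))

    candidate = {
        "independent_audit": True,
        "clean_breach_history": True,
        "subprocessor_list": False,
        "ai_transparency": False,
        "contractual_notice": True,
    }
    score = trust_score(candidate)
    print(f"trust score: {score:.2f} ->", "proceed" if score >= 0.70 else "escalate review")

A production model would add evidence freshness, sector-specific weighting, and continuous reassessment; the sketch only shows the structural idea of scoring trust before onboarding rather than after an incident.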
Critical Infrastructure: High Dependency, Low Visibility
This sector is critically reliant on third-party control systems, including SCADA vendors, remote monitoring platforms, and cloud infrastructure. Yet most organizations lack end-to-end visibility into these external systems, creating blind spots that attackers can exploit and operators cannot quickly diagnose.
Pharmaceuticals: Accelerated AI Adoption Without Governance
Pharmaceutical firms are rapidly integrating AI into their R&D and operational workflows, from molecule discovery to supply chain optimization. This expansion often outpaces governance, leaving models poorly validated, insufficiently monitored, and vulnerable to misuse or malfunction.
Technology: Hyper-Connected, Under-Governed
Tech companies operate in highly interconnected digital ecosystems built on layered partnerships, APIs, and open-source dependencies. This interconnection fuels innovation, but governance maturity often lags behind it, producing fragmented oversight, weak third-party risk frameworks, and a high rate of inherited vulnerabilities.
Financial Services: AI-Driven Decisions Without Transparency
The financial sector increasingly relies on AI for underwriting, fraud detection, and client decisioning. Yet many institutions lack clear audit trails or interpretability mechanisms, raising concerns about bias, systemic error, and regulatory non-compliance in AI-led decision systems.
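A minimal sketch of what such an audit trail could look like follows: each AI-led decision is logged with the model version, a hash of its inputs, the outcome, and the top contributing factors, so it can later be reviewed for bias or error. The field names and the example decision are illustrative assumptions.

    # Audit-trail sketch for AI-led decisions. Field names are illustrative.
    import datetime
    import hashlib
    import json

    def audit_record(model_version, inputs, decision, top_factors):
        """Build one reviewable record of a single model decision."""
        return {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            # Hash rather than store raw inputs, to keep the trail reviewable
            # without duplicating sensitive client data.
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
            "decision": decision,
            "top_factors": top_factors,  # e.g. from a feature-attribution method
        }

    rec = audit_record(
        model_version="underwriting-v2.1",
        inputs={"income": 52000, "debt_ratio": 0.41},
        decision="declined",
        top_factors=["debt_ratio", "credit_age"],
    )
    print(json.dumps(rec, indent=2))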
Organizations are operating within ecosystems they don't control, powered by AI they don't understand, and exposed through vendors they can't see. This is a profound vulnerability impacting resilience, governance, and national sovereignty.
"The primary strategic risk now comes from a lack of transparency within complex, interconnected systems," says Dr. Dave Venable, Chairman of the ISRS. "We're not just fighting a visible enemy; we're struggling to manage risks we can't fully see."
Mitigating this cascade of risk will require a fundamental shift in our approach to cybersecurity. It demands a move from reactive controls to proactive visibility, from isolated compliance to integrated oversight, and from chasing incidents to building trustworthy systems by design.
Prepared by:
ISRS Strategic Advisory & Risk Analysis Unit
Geneva, Switzerland
About ISRS
The Institute for Strategic Risk and Security (ISRS) is an independent, non-profit, non-governmental organization focused on global risk and security.
Copyright (c) 2025, Institute for Strategic Risk and Security