More Parties, More Risks, More Opportunity? Evolving Governance to Support Cyber Resilience Amidst Evolving Policy and Technological Change
*Special thanks to Jim Siegl and Jocelyn Aqua for their advice and expertise.
Summary: Artificial Intelligence (AI) presents fundamental opportunities and challenges for defense of increasingly complex digital ecosystems amid rising attack costs, fragmented regulation, and evolving industry practices. A coordinated response across the public and private sectors, including smart deployment of AI tools for risk detection and defense, is critical to building resilient AI systems and securing supply chains. This article describes emerging risks, identifies regulations and governance frameworks relevant to addressing them, and proposes governance steps that organizations can take to improve supply chain resilience.
In recent years, third-party and supply chain cybersecurity attacks have become one of the most significant risks to national and organizational security. The 2020 SolarWinds breach demonstrated how integrated environments built on shared code, automated updates, and implicit trust in upstream vendors can allow a single vendor breach to cascade across agencies and enterprises. That incident granted foreign adversaries unauthorized access to more than 200 public and private organizations, including the Departments of Homeland Security, Treasury, and Commerce. Although the U.S. Securities and Exchange Commission (SEC) ultimately dismissed its civil enforcement action against SolarWinds, the incident illustrates how an attack on one trusted software provider can lead to system-wide failures. In 2023, attackers compromised PyTorch, an open-source artificial intelligence/machine learning (AI/ML) framework, by injecting malware through a supply chain attack. In 2024, the XZ Utils backdoor illustrated how a single vulnerability in a trusted open-source library can compromise the build process and enable remote code execution across countless systems.
The threat became more pronounced in 2025. Approximately 30% of cybersecurity breaches last year originated from third-party relationships – double the percentage from just two years earlier. This rise tracks closely with increased reliance on external vendors, cloud platforms, model providers, and open-source components. While the growth of these interconnected supply chains can yield efficiencies and service improvements and accelerate innovation, they can also multiply the number of attack surfaces that bad actors can exploit.
Over several years, FPF has been exploring the ways that AI can accentuate security risks, while also creating new detection and defense capabilities. The recent announcement of Project Glasswing put a spotlight on the presence of both opportunity and risk as AI technologies rapidly evolve. Autonomous and agentic systems add new layers of complexity and risk – as well as opportunities to more effectively detect, combat, and mitigate those risks. Unlike traditional software, agentic AI systems may ingest external data, reuse pretrained models, and act across organizational boundaries with limited human intervention, which introduces or exacerbates distinct vulnerabilities. These risks intersect with traditional cybersecurity concerns but require new or expanded governance mechanisms around data provenance, model integrity, and automated decision-making.
Emerging Risks in AI-Enabled Supply Chains
Organizations must navigate an evolving industry landscape while managing an interconnected network of vendors, cloud services, and open-source components, where a single compromised dependency can create systemic risk that cascades across operations.
Risks and Opportunities from Third-Party Components and Systems
Third-party software libraries, datasets, and cloud infrastructure can yield enormous value for organizations, including for risk management and cyber defense. At the same time, these tools can introduce vulnerabilities that are difficult to detect or control. In AI ecosystems, dependency chains are often deeper and less transparent than in traditional software systems, encompassing not just code, but models, training data, and pre-trained weights. The proliferation of new AI-driven technologies and services, particularly those that involve agents, amplifies these risks. Once deployed, these agentic AI systems can act independently and potentially bypass traditional security controls.
Risks Amplified by AI Systems
AI systems and plugins can introduce new cyberattack methods or exacerbate established ones. These techniques exploit the model’s reliance on data and user input to manipulate system behavior or extract sensitive information. Specific examples include:
- Data and model poisoning through compromised training data or dependency libraries that alter model behavior at scale;
- Prompt injection attacks where malicious inputs manipulate model outputs or downstream actions without altering underlying infrastructure;
- Model supply chain hijacking via malicious model weights or corrupted open-source components;
- Autonomous agent exploits, where AI agents interact with external systems or application programming interfaces (APIs) using delegated credentials, tool access, or persistent permissions without sufficient guardrails; or
- Cross-system interdependency, when a compromise in one model, tool, or plugin spreads across an entire interconnected ecosystem.
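One practical defense against model supply chain hijacking, as described above, is verifying model artifacts against pinned cryptographic digests before loading them. The sketch below is illustrative only: the artifact name is hypothetical, and the pinned value is simply the SHA-256 of an empty placeholder file, standing in for a digest a vendor would publish through a signed channel.

```python
import hashlib
from pathlib import Path

# Pinned digests for approved model artifacts. In practice these would come
# from a signed registry or the vendor's published checksums, not from source
# code. The digest below is the SHA-256 of an empty file, for illustration.
APPROVED_DIGESTS = {
    "classifier-v2.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_trusted_artifact(path: Path) -> bool:
    """Return True only if the file's SHA-256 matches a pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return APPROVED_DIGESTS.get(path.name) == digest
```

A check like this is cheap to run in a deployment pipeline and converts a silent weight-swap into a hard, auditable failure.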
Agentic AI systems introduce a distinct risk profile characterized by autonomy, multi-step decision-making, and the ability to take actions in external environments. Rather than producing static outputs in response to bounded inputs, these systems can plan, iterate, and take actions across external environments using delegated tools and credentials. This shift effectively extends the operational boundary of the system to include external services, APIs, and data sources in real time. As a result, risk is no longer confined to model performance or data integrity, but includes the downstream effects of autonomous decision-making and execution across interconnected systems.
These risks are amplified in environments where agents operate with persistent credentials or broad API access. In such contexts, a single compromised interaction can propagate across systems, particularly when agents are designed to optimize for task completion without sufficiently robust constraints on permissible actions. The resulting behavior may be difficult to predict or audit, as it emerges from the interaction between model outputs, tool responses, and external system states rather than from a single deterministic process.
As organizations deploy agentic AI, institutional decision-making can risk becoming more distributed and opaque. Agents may interact autonomously with external systems, exacerbating cybersecurity risks such as propagation of incorrect or malicious instructions across the supply chain, extraction of confidential data, and escalation-of-privilege scenarios (if access controls are misconfigured). The autonomy of agents may require new or evolved forms of oversight, logging, and training.
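The constraints on permissible agent actions discussed above can be enforced with a deny-by-default policy layer between the agent and its tools. The following minimal sketch assumes hypothetical tool names and a hand-written policy table; real deployments would load policy from configuration and log every decision.

```python
# Deny-by-default guardrail for agent tool calls (illustrative sketch).
# Tool names and actions are hypothetical, not drawn from any real product.
ALLOWED_ACTIONS = {
    "search_docs": {"read"},
    "ticketing":   {"read", "create"},
    # "payments" is deliberately absent: anything not listed is denied.
}

def authorize(tool: str, action: str) -> bool:
    """Permit an agent action only if the (tool, action) pair is allowlisted."""
    return action in ALLOWED_ACTIONS.get(tool, set())

def execute_tool_call(tool: str, action: str, payload: dict) -> str:
    """Gate every tool invocation through the policy check before execution."""
    if not authorize(tool, action):
        # Block and surface the refusal for audit rather than failing silently.
        return f"BLOCKED: {tool}.{action} is not permitted by policy"
    return f"OK: {tool}.{action} executed"
```

Because the policy sits outside the model, a prompt-injected instruction can change what the agent *asks* to do but not what it is *allowed* to do.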
AI Governance and Accountability
Technical controls alone are insufficient to mitigate AI-specific supply chain risks. Effective enterprise cybersecurity requires active leadership oversight and a culture of accountability. Executives must move beyond a “baseline understanding” and toward a risk-aware mindset where cybersecurity training is tailored to AI-specific industry roles and threat models. Company policies and protocols should incorporate this understanding. Human governance is essential to assess and enforce organizational standards.
Applicable Regulations and Governance Frameworks
In the absence of a single statutory framework that governs the intersection of AI and cybersecurity, federal and state agencies have developed a range of guidelines, voluntary frameworks, certifications, and procurement requirements that seek to address growing cyber and AI governance risks.
Security Guidance from the Federal Government
Several federal frameworks provide relevant guidance for companies around third-party and supply chain cyber risk:
- National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) and NIST Special Publications (SPs) 800-171 and 800-161: Offers detailed technical guidance for supply chain risk management (SCRM), with emphasis on risk assessments, dependency mapping, continuous monitoring, and vendor due diligence.
The NIST Cybersecurity Framework is voluntary, scalable guidance for managing cybersecurity risk. The updated CSF 2.0 includes “govern” as a key function, which embeds cybersecurity governance into enterprise risk management, aligning strategy, policy, and oversight with business objectives.
- NIST SP 800-161 provides comprehensive guidance for enterprise SCRM. It recommends a multidisciplinary governance structure, emphasizes iterative risk assessment and monitoring, and integrates risk management into procurement processes.
- Cybersecurity and Infrastructure Security Agency (CISA) Secure by Demand Guide: Provides buyers a checklist of questions to assess software manufacturers’ supply chain security practices, such as establishing secure authentication defaults, reporting vulnerabilities, and providing security logs and a software bill of materials (SBOM).
- CISA Tabletop Exercise Packages (CTEPs) and Tips: Supports agencies and vendors in evaluating their cloud and procurement-related cybersecurity frameworks.
- CISA also offers best practices for cloud security and third-party risk management that emphasize shared responsibility models, continuous monitoring, and secure integration of AI services.
- Department of Defense’s Cybersecurity Maturity Model Certification (CMMC): Sets standards for federal contractors, including vendors supplying AI services or model components to defense agencies.
- Federal Risk and Authorization Management Program (FedRAMP): Establishes security requirements for cloud service providers, and its procurement standards now extend to AI services deployed within federal environments.
AI Guidance from the Federal Government
Federal guidance on AI-related cybersecurity continues to evolve, offering several guides for how to approach AI-related risks in supply chains:
- NIST AI Risk Management Framework (AI RMF): Provides a structured approach for assessing AI-related risks, encouraging transparency and accountability across the AI lifecycle.
- The White House AI Action Plan sets out high-level policy principles around safety, transparency, and procurement/vendor accountability, calling for stronger oversight mechanisms to ensure that AI tools integrated into supply chains are trustworthy and secure.
State Governance
States are taking an increasingly active role in regulating AI and related cybersecurity risks. New York and California, in particular, have adopted notable AI and cybersecurity requirements.
- New York Department of Financial Services (NYDFS) – 2025 Industry Guidance: Highlights the importance of incorporating AI governance into cybersecurity compliance (and noted that automation can amplify existing vulnerabilities), requiring financial institutions to evaluate AI model risks, confirm training data provenance, and assess vendor-level AI controls.
- California Privacy Protection Agency (CPPA) – 2025 Regulations: One of the first comprehensive state-level efforts to regulate AI systems and third-party data handling practices. Applicable provisions govern automated decision-making technologies (ADMT); mandate cybersecurity audits for parties meeting certain thresholds tied to business volume and the selling or sharing of data; and address vendor accountability.
Industry Guidance
In addition to regulatory guidance and frameworks from federal and state government agencies, there are a number of industry standards and best practices that may address AI- and agent-related third-party and supply chain cybersecurity risks. Examples include:
- Open Worldwide Application Security Project (OWASP) GenAI Security Project – CheatSheet – A Practical Guide for Securely Using Third-Party MCP Servers 1.0: Provides a framework for companies and developers using third-party Model Context Protocol (MCP) servers. Along with mapping out common threat types, this cheat sheet provides actionable controls and workflows, such as strong authentication processes, sandboxed environments, and validation measures (e.g., establishing a “trusted MCP registry” and instituting periodic audits).
- SysAdmin, Audit, Network, and Security (SANS) Institute – Critical AI Security Guidelines: Provides a practitioner-oriented framework to help organizations build, deploy, and operate secure AI systems. Recommends developing strict access or authentication controls, safe deployment strategies (e.g., sandboxing or red-teaming), risk-based deployment, and regular data sanitization and validation.
- Snowflake – AI Security Framework: Develops a threat taxonomy of security and privacy risks specific to AI systems to help cross-discipline teams evaluate AI risk in a systematic way. The framework also provides mitigation strategies to address listed risks, though specific implementation would depend on the architecture, environment, and threat model.
- Massachusetts Institute of Technology (MIT) AI Risk Initiative – Mapping Frameworks at the Intersection of AI Safety and Traditional Risk Management: Although this analysis does not provide specific risk mitigation strategies, it provides an overview of almost a dozen AI risk management frameworks that sit “at the intersection of traditional risk management and AI safety” (with a particular emphasis on frontier, general-purpose, or “high-risk” AI systems). The MIT initiative could serve as a starting point for companies who want to ground their AI risk-management in proven safety or risk frameworks.
Across the public and the private sector, guidance on third-party and AI-related cyber risk is converging around core principles of transparency, accountability, and continuous oversight and governance. Federal frameworks have established baseline expectations for secure procurement and vendor management, while states are advancing more specific AI governance requirements. Industry standards can complement these efforts by offering practical controls and methodologies for implementing secure and responsible AI practices. Collectively, these frameworks underscore the need for organizations to adopt an integrated, risk-based approach to managing third-party and AI supply-chain security.
Recommendations and Next Steps
To strengthen AI-driven supply chain resilience, organizations should prioritize:
- AI Model and Agent Monitoring: Establish passive AI agent monitoring, then consider moving toward active “guardrails” to intercept and block anomalous agent actions, cross-system API calls, or unauthorized data exfiltration in real time.
- Provenance Requirements for Third-Party AI Models: Consider requiring AI Bills of Materials (AI-BOMs), under which vendors provide a standardized AI-BOM that inventories code libraries (a “Software Bill of Materials” or SBOM), model provenance, training dataset origins, and cryptographic signatures of model weights to prevent tampering.
- AI-Specific Vendor Risk Assessments: Evaluate not only traditional cybersecurity controls but also model lineage, dataset provenance, and plugin dependencies. Consider AI-specific adversarial red-teaming (i.e., updating vendor risk assessments to include results from adversarial testing such as prompt injection and data poisoning resilience).
- Contracts and Procurement Controls: Include model security obligations, and update notification requirements and audit rights. Consider updating vendor contracts to ensure that no high-impact decision is made without a clear path for human intervention.
- Organizational Literacy: Ensure boards and executives understand AI-specific supply chain risks to enable informed oversight decisions. Elevate AI literacy beyond the IT department. Form a committee of legal, security, and business leaders to define the organization’s risk appetite for third-party AI dependencies and agentic autonomy.
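To make the AI-BOM recommendation above concrete, the sketch below shows what a minimal AI-BOM entry and a completeness check could look like. The field names, component names, and data sources are hypothetical assumptions, not drawn from any published AI-BOM schema.

```python
# A hypothetical minimal AI-BOM entry. All names and values are illustrative.
ai_bom_entry = {
    "component": "sentiment-classifier",
    "version": "1.4.2",
    "supplier": "example-vendor",
    "weights_sha256": "0" * 64,          # placeholder digest of model weights
    "training_data_sources": ["internal-reviews-2024", "public-corpus"],
    "dependencies": ["numpy", "torch"],  # the conventional SBOM layer
    "signature": None,                   # vendor's cryptographic signature, if any
}

# Fields an organization might treat as mandatory before accepting a vendor model.
REQUIRED_FIELDS = {"component", "version", "supplier", "weights_sha256"}

def validate_ai_bom(entry: dict) -> list[str]:
    """Return the sorted list of required fields missing from an AI-BOM entry."""
    return sorted(REQUIRED_FIELDS - entry.keys())
```

A validation step like this gives procurement teams a mechanical gate: an AI-BOM with missing provenance fields is rejected before the model ever enters the environment.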
Conclusion
The accelerating convergence of AI adoption, complex vendor ecosystems, and increasingly sophisticated cyber threats has elevated third-party and supply-chain security to a critical strategic priority for industry leadership. Recent incidents and rising breach rates demonstrate that traditional governance models must evolve for environments characterized by autonomous systems, complex dependency chains, and cross-system interdependencies. Both the private and public sector are responding with increasingly aligned expectations that emphasize transparency, accountability, and continuous monitoring across the AI lifecycle and vendor ecosystem.
For organizations, the imperative is to move beyond fragmented or compliance-only approaches and adopt an integrated, risk-based governance model that unifies traditional cybersecurity controls with AI-specific safeguards and robust oversight. Businesses that strengthen vendor accountability, implement continuous model monitoring, and invest in organizational education will be best positioned to mitigate systemic risks, realize new opportunities to strengthen defenses, maintain operational resilience, and meet evolving regulatory obligations.
For questions about FPF membership or our ongoing work related to the topics discussed in this blog, please contact info@org.