Abstract:
To address the security threats arising from the high complexity and cross-layer coupling of Artificial Intelligence (AI) systems, a quantitative analysis method for cross-layer security vulnerabilities in AI systems was proposed. First, a seven-layer technical architecture was established, covering the hardware, network, basic software, data, model algorithm, intelligent agent, and application layers, and vulnerabilities at each layer were systematically identified by integrating vulnerability database data. A coupled modeling system comprising the Threat Conduction Probability Model (TCPM), the Risk Diffusion Intensity Model (RDIM), and the Defense Cost Effectiveness Model (DCEM) was then developed to quantify the probability of threat propagation across layers, evaluate the risk diffusion intensity along attack chains, and measure the cost-effectiveness of candidate defense measures. Statistical analysis of 1,841 AI-related vulnerabilities shows that medium- and high-severity vulnerabilities account for 65.2% of the total. A case study of the NVIDIA Container Toolkit vulnerability (CVE-2024-0132) demonstrates that the probability of this vulnerability propagating upward from the basic software layer exceeds 0.96 at each stage; after progressive cross-layer amplification, the final risk intensity at the application layer reaches 0.922, which is classified as a critical risk. Among the three defense measures evaluated, patching and upgrading the container toolkit at the source achieves the best cost-effectiveness ratio of 0.052. These results indicate that the proposed modeling system effectively characterizes risk amplification effects and identifies critical defense nodes in AI systems, providing a quantitative basis for formulating precise and economical security strategies.
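The compounding described above can be sketched numerically. This is not the paper's actual TCPM/RDIM/DCEM formulation; it is a minimal illustrative assumption in which per-hop conduction probabilities are treated as independent and multiplied along the attack chain, and the cost-effectiveness ratio is taken as defense cost per unit of risk reduced. The hop values and function names are hypothetical.

```python
from math import prod

def chain_propagation_probability(hop_probs):
    """Cumulative probability that a threat traverses every hop of an
    attack chain, assuming independent per-hop conduction probabilities.
    (Illustrative assumption, not the paper's TCPM definition.)"""
    return prod(hop_probs)

def cost_effectiveness_ratio(defense_cost, risk_reduction):
    """Cost paid per unit of risk intensity removed; lower is better.
    (Illustrative assumption, not the paper's DCEM definition.)"""
    return defense_cost / risk_reduction

# Hypothetical chain from the basic software layer up to the application
# layer, with each hop above the 0.96 threshold cited in the abstract.
hops = [0.97, 0.96, 0.98, 0.96]
print(round(chain_propagation_probability(hops), 3))  # prints 0.876
```

Even with every hop above 0.96, the end-to-end probability decays multiplicatively with chain length, which is why per-stage probabilities near 1.0 are needed for a high final risk intensity at the application layer.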