INTELLIGENT PREDICTION OF FATIGUE LIFE FOR CORRODED STEEL WIRES IN BRIDGE CABLES BASED ON SSA-LSTM-SHAP


    Abstract: Bridge cables are critical load-bearing components of long-span cable-supported bridges. Under long-term exposure to complex environments they are susceptible to corrosion-fatigue damage, and the resulting performance degradation directly affects bridge safety and service life. Early experimental studies focused primarily on single factors, such as corrosion rate or stress ratio, and largely neglected the complex interactions among multiple variables. Meanwhile, existing data-driven models are limited in prediction performance and convergence speed, and offer insufficient interpretability. To address these gaps, this study establishes a hybrid machine learning model, SSA-LSTM-SHAP. The SSA-LSTM component maps the complex nonlinear relationship between fatigue performance characteristic parameters and fatigue life, while SHAP (Shapley additive explanations) is integrated to enhance the model's convergence speed and interpretability. By consolidating fatigue performance data for corroded bridge-cable wires from the pertinent literature, a fatigue life database covering a range of operational parameters was constructed, and the model's predictive performance was systematically validated on this database. The results indicate that the proposed SSA-LSTM model achieves high prediction accuracy and strong generalization capability. Furthermore, the SHAP method addresses the "black box" nature of the SSA-LSTM model: its attributions reveal the specific contribution of each input parameter to the model's predictions, providing a theoretical basis for feature engineering optimization.
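The SHAP attribution step described in the abstract can be illustrated in miniature. The sketch below computes exact Shapley values for a toy surrogate model by enumerating all feature coalitions, which is the quantity SHAP approximates efficiently for large networks such as the trained LSTM. The feature names, coefficients, and sample values are hypothetical placeholders, not the authors' model or data.

```python
import itertools
import math
import numpy as np

# Toy linear surrogate standing in for the trained fatigue-life predictor.
# Linearity is chosen only so the attributions can be checked in closed form;
# feature names are hypothetical corrosion/loading parameters.
FEATURES = ["corrosion_rate", "stress_range", "stress_ratio"]
COEF = np.array([-2.0, -1.5, 0.5])  # hypothetical sensitivities

def predict(x):
    """Surrogate fatigue-life prediction (arbitrary units)."""
    return float(COEF @ x)

def shapley_values(x, background):
    """Exact Shapley attributions of predict(x) relative to a background
    sample, by enumerating every coalition of the other features.
    Feasible only for a handful of features; SHAP estimates the same
    quantity efficiently for real models."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                z_with, z_without = background.copy(), background.copy()
                for j in S:                     # coalition features take x
                    z_with[j] = x[j]
                    z_without[j] = x[j]
                z_with[i] = x[i]                # add feature i itself
                phi[i] += w * (predict(z_with) - predict(z_without))
    return phi

x = np.array([0.8, 300.0, 0.1])   # one corroded-wire test case (hypothetical)
bg = np.array([0.2, 250.0, 0.5])  # baseline, e.g. the dataset mean
phi = shapley_values(x, bg)
# Local-accuracy property: attributions sum to the prediction gap.
```

For a linear model the Shapley values reduce to `COEF * (x - bg)`, which makes the enumeration easy to verify; the same local-accuracy property (attributions summing to the difference between the prediction and the baseline) is what makes SHAP rankings of input-parameter importance meaningful for feature engineering.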

