Course Code: secaimod
Duration: 14 hours
Prerequisites:
  • An understanding of machine learning workflows and model training
  • Experience with Python and common ML frameworks such as PyTorch or TensorFlow
  • Familiarity with basic security or threat modeling concepts is helpful

Audience

  • Machine learning engineers
  • Cybersecurity analysts
  • AI researchers and model validation teams
Overview:

Securing AI models is the discipline of defending machine learning systems against threats specific to them, including adversarial inputs, data poisoning, inversion attacks, and privacy leakage.

This instructor-led, live training (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses such as robust training and differential privacy.

By the end of this training, participants will be able to:

  • Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
  • Simulate attacks and test models using tools such as the Adversarial Robustness Toolbox (ART).
  • Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
  • Design threat-aware model evaluation strategies for production environments.

Format of the Course

  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options

  • To request a customized training for this course, please contact us to arrange.
Course Outline:

Introduction to AI Threat Modeling

  • Why are AI systems vulnerable to attack?
  • AI attack surfaces compared with traditional systems
  • Key attack vectors: the data, model, output, and interface layers

Adversarial Attacks on AI Models

  • Understanding adversarial examples and perturbation techniques
  • White-box vs. black-box attacks
  • FGSM, PGD, and DeepFool methods (a minimal FGSM sketch follows this list)
  • Visualizing and crafting adversarial samples
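
To make the perturbation idea concrete, the sketch below shows FGSM in PyTorch; the classifier, cross-entropy loss, and epsilon value are illustrative assumptions rather than part of the course materials.

    # Minimal FGSM sketch (PyTorch). The classifier, cross-entropy loss, and
    # epsilon value are illustrative assumptions, not course materials.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft adversarial examples with the Fast Gradient Sign Method."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction of the sign of the input gradient.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed input inside the valid pixel range.
        return torch.clamp(x_adv, 0.0, 1.0).detach()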

Model Inversion and Privacy Leakage

  • Inferring training data from model outputs
  • Membership inference attacks (see the sketch after this list)
  • Privacy risks in classification and generative models
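
As one illustration of privacy leakage, below is a minimal confidence-threshold membership-inference sketch; it is a deliberately weak heuristic, and the threshold value is an illustrative assumption.

    # Minimal membership-inference sketch: samples on which the model is
    # unusually confident are guessed to be training members. The threshold
    # is an illustrative assumption and the heuristic is deliberately weak.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def membership_guess(model, x, threshold=0.9):
        """Return True for inputs guessed to belong to the training set."""
        probs = F.softmax(model(x), dim=1)
        confidence, _ = probs.max(dim=1)
        return confidence > threshold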

Data Poisoning and Backdoor Injection

  • How poisoned data shapes model behavior
  • Trigger-based backdoors and Trojan attacks (a minimal poisoning sketch follows this list)
  • Detection and sanitization strategies
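
The sketch below illustrates trigger-based poisoning on an image batch; the patch size, poison rate, and target label are illustrative assumptions.

    # Minimal trigger-based poisoning sketch: a small patch is stamped onto a
    # random subset of training images and their labels are flipped to the
    # attacker's target class.
    import torch

    def poison_batch(images, labels, target_label=0, poison_rate=0.1):
        """Stamp a 3x3 corner trigger onto part of a batch and relabel it."""
        images, labels = images.clone(), labels.clone()
        n_poison = int(poison_rate * images.size(0))
        idx = torch.randperm(images.size(0))[:n_poison]
        images[idx, :, -3:, -3:] = 1.0   # trigger patch in the bottom-right corner
        labels[idx] = target_label       # force the attacker's chosen class
        return images, labels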

Robustness and Defense Techniques

  • Adversarial training and data augmentation (see the training-loop sketch after this list)
  • Gradient masking and input preprocessing
  • Model smoothing and regularization techniques
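
Below is a minimal adversarial-training loop sketch that reuses the fgsm_attack helper from the adversarial-attacks sketch above; the 50/50 clean/adversarial loss weighting is an illustrative assumption.

    # Minimal adversarial-training loop sketch (PyTorch). Reuses the
    # fgsm_attack helper defined in the earlier sketch.
    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
        model.train()
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon)   # craft examples on the fly
            optimizer.zero_grad()
            # Train on a mix of clean and adversarial examples.
            loss = 0.5 * F.cross_entropy(model(x), y) \
                 + 0.5 * F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()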

Privacy-Preserving AI Defenses

  • Introduction to differential privacy
  • Noise injection and privacy budgets (a minimal sketch follows this list)
  • Federated learning and secure aggregation
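
As a simplified illustration of noise injection, the sketch below clips a gradient tensor and adds calibrated Gaussian noise in the style of DP-SGD; real implementations clip per-example gradients and track the privacy budget with an accountant, and the clip norm and noise multiplier shown are illustrative assumptions.

    # Simplified noise-injection sketch in the style of DP-SGD: a gradient
    # tensor is clipped to a norm bound and calibrated Gaussian noise is
    # added before the update. Parameter values are illustrative.
    import torch

    def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1):
        """Clip a gradient tensor and add calibrated Gaussian noise."""
        scale = torch.clamp(clip_norm / (grad.norm() + 1e-12), max=1.0)
        noise = torch.randn_like(grad) * noise_multiplier * clip_norm
        return grad * scale + noise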

AI Security in Practice

  • Threat-aware model evaluation and deployment
  • Using ART (Adversarial Robustness Toolbox) in applied scenarios (see the sketch after this list)
  • Industry case studies: real-world data breaches and mitigations
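
Below is a minimal sketch of using ART to generate FGSM adversarial examples against a PyTorch model; the toy architecture, input shape, stand-in data, and epsilon are illustrative assumptions.

    # Minimal ART sketch: wrap a PyTorch model and generate FGSM adversarial
    # examples. The toy architecture, input shape, and epsilon are
    # illustrative assumptions.
    import numpy as np
    import torch.nn as nn
    import torch.optim as optim
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        optimizer=optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_test = np.random.rand(4, 1, 28, 28).astype(np.float32)  # stand-in inputs
    x_adv = attack.generate(x=x_test)                          # adversarial batch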

Summary and Next Steps
