- An understanding of machine learning and deep learning architectures
- Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch)
- Familiarity with cybersecurity concepts or offensive security techniques
Audience
- Security researchers
- Offensive security teams
- AI assurance and red teaming professionals
Red Teaming AI Systems is a specialized area of offensive security focused on identifying weaknesses in machine learning models and deployment pipelines through adversarial testing and stress simulation.
This instructor-led training (online or onsite) is aimed at advanced-level security professionals and machine learning specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and harden deployed AI models.
By the end of this training, participants will be able to:
- Simulate realistic threats against machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Format of the Course
- Interactive lectures and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduction to AI Red Teaming
- Understanding the AI threat landscape
- The role of red teaming in AI security
- Ethical and legal considerations
Adversarial Machine Learning
- Attack types: evasion, poisoning, extraction, inference
- Generating adversarial examples (e.g., FGSM, PGD); see the sketch after this list
- Targeted vs. untargeted attacks and success metrics
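To ground the FGSM discussion, here is a minimal untargeted FGSM sketch in PyTorch. It is illustrative only: `model` is assumed to be a trained classifier, `x`/`y` a labeled batch with inputs scaled to [0, 1], and the epsilon value is a placeholder.

```python
# Minimal untargeted FGSM sketch (PyTorch). Assumes `model` is a trained
# classifier and inputs are scaled to [0, 1]; epsilon is a placeholder.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial versions of `x` that push the loss upward."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back into the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

PGD follows the same pattern but iterates the step several times, re-projecting into an epsilon-ball around the original input after each step.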
Testing Model Robustness
- Evaluating robustness under perturbation (a measurement sketch follows this list)
- Exploring model blind spots and failure modes
- Stress testing classification, vision, and NLP models
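One way to frame the robustness evaluation in this module is to compare clean versus adversarial accuracy on the same test set. The hedged sketch below reuses the illustrative `fgsm_attack` helper from the previous module; `loader` is assumed to be a standard PyTorch `DataLoader` over labeled test data.

```python
# Sketch: clean vs. adversarial accuracy for one model and one attack budget.
import torch

def accuracy_under_attack(model, loader, epsilon=0.03):
    clean_correct, adv_correct, total = 0, 0, 0
    model.eval()
    for x, y in loader:
        with torch.no_grad():
            clean_correct += (model(x).argmax(dim=1) == y).sum().item()
        # The attack itself needs gradients, so it runs outside no_grad().
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, adv_correct / total
```

Sweeping epsilon and tracking the accuracy drop is a common way to expose blind spots and failure modes.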
Red Teaming AI Pipelines
- Attack surfaces of AI pipelines: data, models, and deployment
- Exploiting insecure model APIs and endpoints
- Reverse engineering model behavior and outputs (see the probing sketch after this list)
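As a concrete illustration of black-box probing, the sketch below queries a model inference endpoint and logs input/output pairs, which can later feed surrogate-model training or decision-boundary mapping. The endpoint URL and JSON schema are hypothetical, and such probing should only be run against systems you are authorized to test.

```python
# Hypothetical black-box probing of a model inference API.
import requests

ENDPOINT = "https://example.internal/api/v1/predict"  # hypothetical URL/schema

def probe(samples):
    """Query the endpoint and record input/output pairs for offline analysis."""
    records = []
    for sample in samples:
        resp = requests.post(ENDPOINT, json={"input": sample}, timeout=10)
        resp.raise_for_status()
        records.append({"input": sample, "output": resp.json()})
    return records
```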
Simulation and Tooling
- Using the Adversarial Robustness Toolbox (ART); a usage sketch follows this list
- Red teaming with tools such as TextAttack and IBM ART
- Sandboxing, monitoring, and observability tools
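For the ART portion, a typical workflow is to wrap an existing model in an ART estimator and hand it to an attack class. The sketch below assumes a PyTorch image classifier with placeholder input shape and class count; exact constructor arguments can vary between ART releases.

```python
# ART sketch: wrap a PyTorch classifier and generate FGSM examples.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

def art_fgsm_eval(model, x_test, y_test):
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),   # placeholder: MNIST-shaped inputs
        nb_classes=10,             # placeholder class count
        clip_values=(0.0, 1.0),
    )
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)
    # Compare clean vs. adversarial accuracy on the same labels.
    clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
    adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
    return x_adv, clean_acc, adv_acc
```

TextAttack offers an analogous recipe-based workflow for stress testing NLP models.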
AI Red Teaming Strategy and Defense Collaboration
- Planning red team exercises and objectives
- Communicating findings to blue teams
- Integrating red teaming into AI risk management
Summary and Next Steps