- An understanding of machine learning and deep learning architectures
- Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch)
- Familiarity with cybersecurity concepts or offensive security techniques
Audience
- Security researchers
- Offensive security teams
- AI assurance and red team professionals
Red Teaming AI Systems is a specialized area of offensive security focused on identifying weaknesses in machine learning models and deployment pipelines through adversarial testing and stress simulation.
This instructor-led, live training (online or onsite) is aimed at advanced security professionals and machine learning specialists who wish to simulate attacks on AI systems, discover vulnerabilities, and harden deployed AI models.
By the end of this training, participants will be able to:
- Simulate realistic threats against machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red team strategies for AI deployment environments.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduction to AI Red Teaming
- Understanding the AI threat landscape
- The role of red teams in AI security
- Ethical and legal considerations
Adversarial Machine Learning
- Attack types: evasion, poisoning, extraction, inference
- Generating adversarial examples (e.g., FGSM, PGD)
- Targeted vs. untargeted attacks and success metrics
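To make the FGSM idea above concrete: the attack perturbs each input feature by a small step in the direction of the sign of the loss gradient. The following is a minimal, self-contained sketch against a toy logistic-regression model with hand-picked weights; the model, weights, and `eps` value are illustrative assumptions, not material from any course lab (real attacks would use a framework's autograd instead of a hand-derived gradient).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    """Toy white-box model: logistic regression with known weights w."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, y, eps):
    """One untargeted FGSM step: x_adv = x + eps * sign(dLoss/dx).

    For cross-entropy loss L = -log p(y|x) with y in {0, 1},
    the input gradient is dL/dx_i = (p - y) * w_i, where p = predict(w, x).
    """
    p = predict(w, x)
    grad = [(p - y) * wi for wi in w]
    sign = [1.0 if g > 0 else (-1.0 if g < 0 else 0.0) for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

w = [2.0, -3.0, 1.0]   # assumed model weights (white-box setting)
x = [0.5, -0.2, 0.1]   # clean input, true label y = 1
x_adv = fgsm(w, x, y=1, eps=0.25)

print(round(predict(w, x), 3))      # confidence on the clean input
print(round(predict(w, x_adv), 3))  # confidence after the attack drops
```

PGD follows the same recipe, iterating small FGSM-style steps and projecting back into an eps-ball after each one.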
Testing Model Robustness
- Evaluating robustness under perturbations
- Exploring model blind spots and failure modes
- Stress-testing classification, vision, and NLP models
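One simple way to operationalize "robustness under perturbations" is to sample random perturbations of growing magnitude and measure how many predictions survive all of them. The sketch below does this for a deliberately trivial 1-D threshold classifier; the model, data points, and trial count are illustrative assumptions, and random sampling only gives a crude empirical estimate (not a guarantee, unlike certified-robustness methods).

```python
import random

def model(x):
    """Hypothetical 1-D classifier: predicts 1 iff x > 0."""
    return 1 if x > 0.0 else 0

def robust_accuracy(inputs, labels, eps, trials=200, seed=0):
    """Fraction of points whose prediction survives every sampled
    perturbation with |delta| <= eps (an empirical estimate only)."""
    rng = random.Random(seed)
    robust = 0
    for x, y in zip(inputs, labels):
        if all(model(x + rng.uniform(-eps, eps)) == y for _ in range(trials)):
            robust += 1
    return robust / len(inputs)

inputs = [-1.0, -0.4, 0.3, 0.9]          # illustrative test points
labels = [model(x) for x in inputs]       # clean predictions as ground truth
for eps in (0.1, 0.5, 1.0):
    print(eps, robust_accuracy(inputs, labels, eps))
```

Points near the decision boundary flip first as `eps` grows, which is exactly the "blind spot" picture: robust accuracy degrades from the boundary outward.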
Red Teaming AI Pipelines
- Attack surface of AI pipelines: data, models, deployment
- Exploiting insecure model APIs and endpoints
- Reverse engineering model behavior and outputs
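A minimal illustration of reverse engineering model behavior through an API: even when an endpoint returns only hard labels, repeated queries can recover internal decision logic. The black-box function, its hidden threshold, and the query budget below are all hypothetical stand-ins for a remote prediction endpoint.

```python
def blackbox_predict(x):
    """Stand-in for a remote API: returns only a label, never the weights."""
    SECRET_THRESHOLD = 0.37   # hidden internal parameter the attacker wants
    return 1 if x > SECRET_THRESHOLD else 0

def extract_threshold(lo=-10.0, hi=10.0, queries=50):
    """Recover the hidden decision boundary by binary search on labels:
    each query halves the interval known to contain the threshold."""
    for _ in range(queries):
        mid = (lo + hi) / 2.0
        if blackbox_predict(mid) == 1:
            hi = mid   # boundary is at or below mid
        else:
            lo = mid   # boundary is above mid
    return (lo + hi) / 2.0

est = extract_threshold()
print(round(est, 4))   # converges to the hidden threshold
```

Real extraction attacks against multi-dimensional models use the same principle at scale (many queries fitting a surrogate model), which is why query-rate limits and output coarsening are common API-side defenses.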
Simulation and Tooling
- Using the Adversarial Robustness Toolbox (ART)
- Red teaming with tools such as TextAttack and IBM ART
- Sandboxing, monitoring, and observability tools
AI Red Team Strategy and Defense Collaboration
- Designing red team exercises and objectives
- Communicating findings to blue teams
- Integrating red teaming into AI risk management
Summary and Next Steps