Course Code: redteamingai
Duration: 14 hours
Prerequisites:
  • An understanding of machine learning and deep learning architectures
  • Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch)
  • Familiarity with cybersecurity concepts or offensive security techniques

Target Audience

  • Security researchers
  • Offensive security teams
  • AI assurance and red teaming professionals
Overview:

Red Teaming AI Systems is a specialized area of offensive security that focuses on identifying weaknesses in machine learning models and deployment pipelines through adversarial testing and stress simulation.

This instructor-led, live training (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and harden the robustness of deployed AI models.

By the end of this training, participants will be able to:

  • Simulate real-world threats to machine learning models.
  • Generate adversarial examples to test model robustness.
  • Assess the attack surface of AI APIs and pipelines.
  • Design red teaming strategies for AI deployment environments.

Format of the Course

  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options

  • To request a customized training for this course, please contact us to arrange.
Course Outline:

Introduction to AI Red Teaming

  • Understanding the AI threat landscape
  • The role of red teams in AI security
  • Ethical and legal considerations

Adversarial Machine Learning

  • Attack types: evasion, poisoning, extraction, inference
  • Generating adversarial examples (e.g., FGSM, PGD; see the sketch after this list)
  • Targeted vs. untargeted attacks and success metrics
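
To ground the evasion setting, the following is a minimal untargeted FGSM sketch in PyTorch. The classifier, inputs, and epsilon budget are illustrative assumptions rather than course materials; PGD extends the same idea by iterating smaller signed-gradient steps and projecting back into the epsilon ball.

    # Minimal untargeted FGSM sketch; model, inputs, and eps are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        """Perturb x in the gradient-sign direction that increases the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step, then clamp back to the valid input range.
        x_adv = x_adv + eps * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

An attack is typically counted as successful when the perturbed input changes the predicted class (untargeted) or forces a chosen class (targeted), which is how the success metrics above are usually computed.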

Model Robustness Testing

  • Evaluating robustness to perturbations (a probe sketch follows this list)
  • Exploring model blind spots and failure modes
  • Stress testing classification, vision, and NLP models
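
As one way to operationalize the first bullet, the sketch below measures accuracy while Gaussian noise of increasing strength is added to the inputs. The model and data loader are hypothetical placeholders; a sharp accuracy drop at a small noise level flags a potential blind spot worth deeper adversarial testing.

    # Hedged robustness probe: accuracy under increasing Gaussian input noise.
    import torch

    @torch.no_grad()
    def accuracy_under_noise(model, loader, sigmas=(0.0, 0.05, 0.1, 0.2)):
        results = {}
        for sigma in sigmas:
            correct, total = 0, 0
            for x, y in loader:
                # Add noise, then clamp to the assumed [0, 1] input range.
                x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
                correct += (model(x_noisy).argmax(dim=1) == y).sum().item()
                total += y.numel()
            results[sigma] = correct / total
        return results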

Red Teaming AI Pipelines

  • Attack surface of AI pipelines: data, model, deployment
  • Exploiting insecure model APIs and endpoints (a probing sketch follows this list)
  • Reverse engineering model behavior and outputs
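
To illustrate endpoint probing, here is a hedged sketch that repeatedly queries a hypothetical prediction API and collects the returned confidence scores, which are the raw material for extraction and inference attacks. The URL and response schema are assumptions, not a real service.

    # Black-box probing sketch; endpoint URL and JSON schema are hypothetical.
    import requests

    ENDPOINT = "https://example.com/api/v1/predict"  # placeholder endpoint

    def probe(samples):
        records = []
        for sample in samples:
            resp = requests.post(ENDPOINT, json={"input": sample}, timeout=10)
            resp.raise_for_status()
            # Full probability vectors leak decision-boundary information,
            # which is why defenders often truncate or round them.
            records.append(resp.json().get("probabilities"))
        return records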

Simulation and Tooling

  • Using the Adversarial Robustness Toolbox (ART; a usage sketch follows this list)
  • Red teaming with tools such as TextAttack and IBM ART
  • Sandboxing, monitoring, and observability tools
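
The sketch below shows the basic ART workflow covered in this module: wrap a PyTorch model in an ART estimator and run PGD against it. The toy model and random inputs are placeholders; the class and parameter names follow ART's documented interface, but verify them against your installed version.

    # ART usage sketch; the toy model and random inputs are placeholders.
    import numpy as np
    import torch.nn as nn
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import ProjectedGradientDescent

    # Toy stand-in classifier; in an exercise this is the target model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(3, 32, 32),  # assumed image shape
        nb_classes=10,            # assumed number of classes
        clip_values=(0.0, 1.0),
    )

    attack = ProjectedGradientDescent(estimator=classifier, eps=0.03,
                                      eps_step=0.007, max_iter=40)
    x_clean = np.random.rand(8, 3, 32, 32).astype(np.float32)
    x_adv = attack.generate(x=x_clean)
    print(x_adv.shape)  # same shape as the clean batch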

AI Red Teaming Strategy and Defense Collaboration

  • Designing red teaming exercises and objectives
  • Communicating findings to blue teams
  • Integrating red teaming into AI risk management

Summary and Next Steps
