Course Code: secaimod
Duration: 14 hours
Prerequisites:
  • An understanding of machine learning workflows and model training
  • Experience with Python and common ML frameworks such as PyTorch or TensorFlow
  • Familiarity with basic security or threat modeling concepts is helpful

Audience:

  • Machine learning engineers
  • Cybersecurity analysts
  • AI researchers and model validation teams

Overview:

Securing AI Models is the discipline of defending machine learning systems against model-specific threats such as adversarial inputs, data poisoning, model inversion, and privacy leakage.

This instructor-led, live training (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses such as adversarial training and differential privacy.

By the end of this training, participants will be able to:

  • Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
  • Use tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
  • Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
  • Design threat-aware model evaluation strategies in production environments.

Format of the Course:

  • Interactive lecture and discussion.
  • Extensive exercises and guided practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options:

  • To request a customized training for this course, please contact us to arrange it.

Course Outline:

Introduction to AI Threat Modeling

  • What makes AI systems vulnerable?
  • The AI attack surface vs. traditional systems
  • Key attack vectors: data, model, output, and interface layers

Adversarial Attacks on AI Models

  • Understanding adversarial examples and perturbation techniques
  • White-box vs. black-box attacks
  • FGSM, PGD, and DeepFool methods
  • Visualizing and crafting adversarial samples (see the ART sketch after this list)
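
The sketch below shows the typical ART workflow: wrap a model, instantiate an attack, and generate perturbed inputs. It is a minimal illustration rather than the course's reference lab: the toy PyTorch model, the random placeholder batch, and the eps value of 0.1 are all assumptions made for the example.

    import numpy as np
    import torch
    import torch.nn as nn
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # Toy stand-in model; a lab exercise would load a trained network instead.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Wrap the model so ART attacks can query its gradients and predictions.
    classifier = PyTorchClassifier(
        model=model, loss=loss_fn, optimizer=optimizer,
        input_shape=(1, 28, 28), nb_classes=10, clip_values=(0.0, 1.0),
    )

    # FGSM: a single signed-gradient step bounded by an L-infinity budget eps.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_clean = np.random.rand(16, 1, 28, 28).astype(np.float32)  # placeholder batch
    x_adv = attack.generate(x=x_clean)

    # Count how many predictions the perturbation flipped.
    clean_preds = classifier.predict(x_clean).argmax(axis=1)
    adv_preds = classifier.predict(x_adv).argmax(axis=1)
    print("predictions changed:", int((clean_preds != adv_preds).sum()), "of", len(x_clean))

Swapping FastGradientMethod for ProjectedGradientDescent yields the iterative PGD attack through the same generate interface.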

Model Inversion and Privacy Leakage

  • Inferring training data from model outputs
  • Membership inference attacks (a minimal sketch follows this list)
  • Privacy risks in classification and generative models
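
To make membership inference concrete, here is a minimal confidence-thresholding sketch. It assumes only black-box access to softmax outputs; the probability values and the 0.9 threshold are made-up placeholders for illustration.

    import numpy as np

    def confidence_threshold_membership(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
        # Flag a record as a suspected training-set member when the model's
        # top-class confidence exceeds the threshold: models are often
        # measurably more confident on data they memorized during training.
        return probs.max(axis=1) > threshold

    # Hypothetical softmax outputs for seen (member) and unseen (non-member) records.
    member_probs = np.array([[0.98, 0.01, 0.01], [0.95, 0.03, 0.02]])
    nonmember_probs = np.array([[0.55, 0.25, 0.20], [0.40, 0.35, 0.25]])
    print(confidence_threshold_membership(member_probs))     # [ True  True]
    print(confidence_threshold_membership(nonmember_probs))  # [False False]

Stronger, trainable variants of this attack ship with ART under art.attacks.inference.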

Data Poisoning and Backdoor Injections

  • How poisoned data influences model behavior
  • Trigger-based backdoors and Trojan attacks (sketched after this list)
  • Detection and sanitization strategies
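
A minimal sketch of trigger-based poisoning, assuming image data in [0, 1] with shape (N, C, H, W); the 3x3 white patch, 5% poisoning rate, and target class are arbitrary illustrative choices.

    import numpy as np

    def poison_with_trigger(images, labels, target_class, rate=0.05, patch=3, seed=0):
        # Stamp a small white square in the bottom-right corner of a random
        # fraction of the images and relabel them to the attacker's class.
        # A model trained on this set learns to fire on the patch (a backdoor)
        # while keeping near-normal accuracy on clean inputs.
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        images[idx, ..., -patch:, -patch:] = 1.0
        labels[idx] = target_class
        return images, labels, idx

    x = np.random.rand(1000, 1, 28, 28).astype(np.float32)  # placeholder data
    y = np.random.randint(0, 10, size=1000)
    x_p, y_p, poisoned_idx = poison_with_trigger(x, y, target_class=7)
    print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")

Detection exercises can start from the same setup, e.g. activation clustering or scanning for near-duplicate patches across the suspect set.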

Robustness and Defense Techniques

  • Adversarial training and data augmentation (see the training-loop sketch below)
  • Gradient masking and input preprocessing
  • Model smoothing and regularization techniques
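
A minimal adversarial-training sketch in PyTorch: each step trains on an equal-weight mix of clean and FGSM-perturbed inputs. The toy model, random batch, eps of 0.1, and 50/50 loss weighting are assumptions for illustration.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, loss_fn, x, y, eps=0.1):
        # Generate FGSM perturbations on the fly for adversarial training.
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    def adversarial_training_step(model, loss_fn, optimizer, x, y, eps=0.1):
        x_adv = fgsm_perturb(model, loss_fn, x, y, eps)
        optimizer.zero_grad()  # clear gradients left on the weights by the perturbation pass
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage with random data standing in for the lab dataset.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
    print("loss:", adversarial_training_step(model, loss_fn, optimizer, x, y))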

Privacy-Preserving AI Defenses

  • Introduction to differential privacy
  • Noise injection and privacy budgets (illustrated after this list)
  • Federated learning and secure aggregation
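
To ground noise injection and privacy budgets, here is a hand-rolled sketch of the clip-then-add-Gaussian-noise step at the core of DP-SGD. The clip norm and noise multiplier are illustrative; a real deployment would use a vetted library such as Opacus together with a privacy accountant to track the budget.

    import torch

    def dp_noisy_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1):
        # Clip each per-sample gradient to an L2 norm of clip_norm, so no single
        # record can dominate the update, then add Gaussian noise scaled to that
        # norm. The noise multiplier, together with the sampling rate and the
        # number of steps, determines the privacy budget (epsilon).
        clipped = []
        for g in per_sample_grads:
            scale = torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)
            clipped.append(g * scale)
        summed = torch.stack(clipped).sum(dim=0)
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        return (summed + noise) / len(per_sample_grads)

    # Toy usage: eight per-sample gradients for one flattened parameter tensor.
    grads = [torch.randn(10) for _ in range(8)]
    print(dp_noisy_gradient(grads))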

AI Security in Practice

  • Threat-aware model evaluation and deployment (see the evaluation sketch below)
  • Using ART (Adversarial Robustness Toolbox) in applied settings
  • Industry case studies: real-world breaches and mitigations
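
The sketch below turns threat-aware evaluation into a reusable gate built on ART, reusing a classifier wrapper like the one in the earlier FGSM example. The eps grid and the idea of gating deployment on the report are policy assumptions, not fixed recommendations.

    import numpy as np
    from art.attacks.evasion import FastGradientMethod, ProjectedGradientDescent

    def robust_accuracy_report(classifier, x_test, y_test, eps_grid=(0.01, 0.05, 0.1)):
        # Measure clean accuracy, then accuracy under FGSM and PGD at several
        # perturbation budgets; run before promoting a model to production.
        y_true = y_test.argmax(axis=1) if y_test.ndim > 1 else y_test
        report = {"clean": float((classifier.predict(x_test).argmax(axis=1) == y_true).mean())}
        for eps in eps_grid:
            for name, attack_cls in (("fgsm", FastGradientMethod), ("pgd", ProjectedGradientDescent)):
                x_adv = attack_cls(estimator=classifier, eps=eps).generate(x=x_test)
                acc = float((classifier.predict(x_adv).argmax(axis=1) == y_true).mean())
                report[f"{name}@eps={eps}"] = acc
        return report

Thresholds on the reported robust accuracies can then back the go/no-go decision alongside conventional test metrics.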

Summary and Next Steps
