Course Code: bsrllmapp
Duration: 14 hours
Prerequisites:
  • An understanding of large language models and prompt-based interfaces
  • Experience building LLM applications using Python
  • Familiarity with API integrations and cloud-based deployments

Audience

  • AI developers
  • Application and solution architects
  • Technical product managers working with LLM tools
Overview:

LLM application security is the discipline of designing, building, and maintaining safe, trustworthy, and policy-compliant systems using large language models.

This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.

By the end of this training, participants will be able to:

  • Understand the core vulnerabilities of LLM-based systems.
  • Apply secure design principles to LLM app architecture.
  • Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
  • Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.

Format of the Course

  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options

  • To request a customized training for this course, please contact us to arrange.
Course Outline:

Overview of LLM Architecture and Attack Surface

  • How LLMs are built, deployed, and accessed via APIs
  • Key components in LLM app stacks (e.g., prompts, agents, memory, APIs)
  • Where and how security issues arise in real-world use

Prompt Injection and Jailbreak Attacks

  • What is prompt injection and why it’s dangerous
  • Direct and indirect prompt injection scenarios
  • Jailbreaking techniques to bypass safety filters
  • Detection and mitigation strategies

Data Leakage and Privacy Risks

  • Accidental data exposure through responses
  • PII leaks and model memory misuse
  • Designing privacy-conscious prompts and retrieval-augmented generation (RAG)

LLM Output Filtering and Guarding

  • Using Guardrails AI for content filtering and validation
  • Defining output schemas and constraints
  • Monitoring and logging unsafe outputs

Human-in-the-Loop and Workflow Approaches

  • Where and when to introduce human oversight
  • Approval queues, scoring thresholds, fallback handling
  • Trust calibration and role of explainability

Secure LLM App Design Patterns

  • Least privilege and sandboxing for API calls and agents
  • Rate limiting, throttling, and abuse detection
  • Robust chaining with LangChain and prompt isolation

Compliance, Logging, and Governance

  • Ensuring auditability of LLM outputs
  • Maintaining traceability and prompt/version control
  • Aligning with internal security policies and regulatory needs

Summary and Next Steps

Sites Published:

United Arab Emirates - Building Secure and Responsible LLM Applications

Qatar - Building Secure and Responsible LLM Applications

Egypt - Building Secure and Responsible LLM Applications

Saudi Arabia - Building Secure and Responsible LLM Applications

South Africa - Building Secure and Responsible LLM Applications

Brasil - Building Secure and Responsible LLM Applications

Canada - Building Secure and Responsible LLM Applications

中国 - Building Secure and Responsible LLM Applications

香港 - Building Secure and Responsible LLM Applications

澳門 - Building Secure and Responsible LLM Applications

台灣 - Building Secure and Responsible LLM Applications

USA - Building Secure and Responsible LLM Applications

Österreich - Building Secure and Responsible LLM Applications

Schweiz - Building Secure and Responsible LLM Applications

Deutschland - Building Secure and Responsible LLM Applications

Czech Republic - Building Secure and Responsible LLM Applications

Denmark - Building Secure and Responsible LLM Applications

Estonia - Building Secure and Responsible LLM Applications

Finland - Building Secure and Responsible LLM Applications

Greece - Building Secure and Responsible LLM Applications

Magyarország - Building Secure and Responsible LLM Applications

Ireland - Building Secure and Responsible LLM Applications

Luxembourg - Building Secure and Responsible LLM Applications

Latvia - Building Secure and Responsible LLM Applications

España - Building Secure and Responsible LLM Applications

Italia - Building Secure and Responsible LLM Applications

Lithuania - Building Secure and Responsible LLM Applications

Nederland - Building Secure and Responsible LLM Applications

Norway - Building Secure and Responsible LLM Applications

Portugal - Building Secure and Responsible LLM Applications

România - Building Secure and Responsible LLM Applications

Sverige - Building Secure and Responsible LLM Applications

Türkiye - Building Secure and Responsible LLM Applications

Malta - Building Secure and Responsible LLM Applications

Belgique - Building Secure and Responsible LLM Applications

France - Building Secure and Responsible LLM Applications

日本 - Building Secure and Responsible LLM Applications

Australia - Building Secure and Responsible LLM Applications

Malaysia - Building Secure and Responsible LLM Applications

New Zealand - Building Secure and Responsible LLM Applications

Philippines - Building Secure and Responsible LLM Applications

Singapore - Building Secure and Responsible LLM Applications

Thailand - Building Secure and Responsible LLM Applications

Vietnam - Building Secure and Responsible LLM Applications

India - Building Secure and Responsible LLM Applications

Argentina - Building Secure and Responsible LLM Applications

Chile - Building Secure and Responsible LLM Applications

Costa Rica - Building Secure and Responsible LLM Applications

Ecuador - Building Secure and Responsible LLM Applications

Guatemala - Building Secure and Responsible LLM Applications

Colombia - Building Secure and Responsible LLM Applications

México - Building Secure and Responsible LLM Applications

Panama - Building Secure and Responsible LLM Applications

Peru - Building Secure and Responsible LLM Applications

Uruguay - Building Secure and Responsible LLM Applications

Venezuela - Building Secure and Responsible LLM Applications

Polska - Building Secure and Responsible LLM Applications

United Kingdom - Building Secure and Responsible LLM Applications

South Korea - Building Secure and Responsible LLM Applications

Pakistan - Building Secure and Responsible LLM Applications

Sri Lanka - Building Secure and Responsible LLM Applications

Bulgaria - Building Secure and Responsible LLM Applications

Bolivia - Building Secure and Responsible LLM Applications

Indonesia - Building Secure and Responsible LLM Applications

Kazakhstan - Building Secure and Responsible LLM Applications

Moldova - Building Secure and Responsible LLM Applications

Morocco - Building Secure and Responsible LLM Applications

Tunisia - Building Secure and Responsible LLM Applications

Kuwait - Building Secure and Responsible LLM Applications

Oman - Building Secure and Responsible LLM Applications

Slovakia - Building Secure and Responsible LLM Applications

Kenya - Building Secure and Responsible LLM Applications

Nigeria - Building Secure and Responsible LLM Applications

Botswana - Building Secure and Responsible LLM Applications

Slovenia - Building Secure and Responsible LLM Applications

Croatia - Building Secure and Responsible LLM Applications

Serbia - Building Secure and Responsible LLM Applications

Bhutan - Building Secure and Responsible LLM Applications

Nepal - Building Secure and Responsible LLM Applications

Uzbekistan - Building Secure and Responsible LLM Applications