Generative AI Security (English)
placeNieuwegein 10 feb. 2026Toon rooster event 10 februari 2026, 08:45-16:00, Nieuwegein, Day 1 |
placeNieuwegein 1 jun. 2026Toon rooster event 1 juni 2026, 08:45-16:00, Nieuwegein, Day 1 |
Vijfhart, dat klopt voor jou!
- Oefenomgeving tot 3 maanden na afronding beschikbaar
- Flexgarantie: wissel eenvoudig van virtueel naar fysiek, of andersom
- Kennisgarantie: volg jouw cursus gratis nog een keer, fysiek of virtueel
- Ontvang een gratis proefexamen bij meerdere opleidingen
- Kies voor een Microsoft-cursus bij Vijfhart en ontvang gratis het officiële Microsoft-examen* t.w.v. max. €155,-.
Lesmethode :
Klassikaal
Algemeen :
Generative AI is changing how we build and secure software. In this one-day training, Generative AI & Security, you will learn the inner workings of modern AI systems and why understanding them is essential for security. The course is developed by the open-source experts at AT Computing and exclusively available via Vijfhart.
You will discover where today's real risks lie and how attackers can misuse large language models to extract sensitive data, leak hidden prompts, or trigger unexpected costs. This course helps you build both awareness and practical skills to work safely and confidently with AI in your daily projects.
By attending this course, you als…
Er zijn nog geen veelgestelde vragen over dit product. Als je een vraag hebt, neem dan contact op met onze klantenservice.
Vijfhart, dat klopt voor jou!
- Oefenomgeving tot 3 maanden na afronding beschikbaar
- Flexgarantie: wissel eenvoudig van virtueel naar fysiek, of andersom
- Kennisgarantie: volg jouw cursus gratis nog een keer, fysiek of virtueel
- Ontvang een gratis proefexamen bij meerdere opleidingen
- Kies voor een Microsoft-cursus bij Vijfhart en ontvang gratis het officiële Microsoft-examen* t.w.v. max. €155,-.
Lesmethode :
Klassikaal
Algemeen :
Generative AI is changing how we build and secure software. In
this one-day training, Generative AI & Security, you will learn
the inner workings of modern AI systems and why understanding them
is essential for security. The course is developed by the
open-source experts at AT Computing and exclusively available via
Vijfhart.
You will discover where today's real risks lie and how attackers
can misuse large language models to extract sensitive data, leak
hidden prompts, or trigger unexpected costs. This course helps you
build both awareness and practical skills to work safely and
confidently with AI in your daily projects.
By attending this course, you also meet the EU AI Act requirements
on AI Literacy. Key topics such as risks, privacy, data sharing,
model bias, and hallucinations are thoroughly addressed, ensuring
that you not only gain technical skills but also the knowledge
needed to remain compliant with upcoming AI regulations.
The course is built around the OWASP Top 10 for LLM Applications,
translating each risk into real-world scenarios you will actually
encounter. We explore input-side threats such as prompt injection,
prompt leakage, and data or model poisoning. You will also examine
output-side pitfalls like insecure handling of generated text,
sensitive information disclosure, and hallucinations with legal or
reputational impact. Finally, we look at architectural issues:
supply-chain vulnerabilities, weaknesses in vector stores and RAG
pipelines, excessive agent permissions, and uncontrolled resource
consumption.
The course is highly interactive and hands-on. In guided lab
sessions, you will practice with real LLM applications: crafting
and detecting prompt injections, simulating poisoning, extracting
secrets, and testing for insecure outputs. For each vulnerability,
we link the exercise to concrete defenses, such as input
validation, output sanitization, guardrails, and robust deployment
strategies so you immediately know how to apply safeguards in
practice.
Doel :
After this course, you can identify, reproduce, and mitigate the
most important security risks in LLM-powered systems and AI-enabled
applications. You'll leave with tested patterns and checklists you
can apply directly in SecOps, DevSecOps, and development
workflows.
This course covers all topics for AI Literacy as required by the EU
AI Act regulation.
Doelgroep :
Security engineers and analysts, SecOps, DevSecOps and software developers.
Voorkennis :
The following prior knowledge is required:
- Technical background
- Some experience in software development or general AI
Onderwerpen :
- Module 1: Introduction to Generative AI and its Security
Implications
- How models are trained and how they work
- Why AI security is critical now
- Module 2: The OWASP LLM Top 10 - A Comprehensive Overview
- Top vulnerabilities affecting Large Language
Models
- Threat landscape and real-world impact
- Module 3: Input-Related Threats
- Prompt Injection
- System Prompt Leakage
- Data & Model Poisoning
- Lab: Prompt-injection and data-poisoning
exercises
- Module 4: Output-Related Threats
- Sensitive Information Disclosure
- Improper Output Handling
- Misinformation & Hallucinations (and legal
implications)
- Lab: Extracting sensitive data & generating
insecure outputs
- Module 5: Infrastructure & Architecture Threats
- Supply Chain Vulnerabilities (incl.
slopsquatting)
- Vector & Embedding Weaknesses in RAG
- Excessive Agency
- Unbounded Consumption (cost abuse)
- Lab: Simulate supply-chain attacks & agent
stress tests
- Module 6: Defensive Strategies and Best Practices
- Robust input validation & output
sanitization
- Guardrails and safety mechanisms in
practice
- Secure AI development lifecycle
- Data handling, model selection, deployment best
practices
Er zijn nog geen veelgestelde vragen over dit product. Als je een vraag hebt, neem dan contact op met onze klantenservice.

