
From LLM to agentic AI: prompt injection got worse
How the shift from single-model LLM integrations to agentic AI systems amplifies prompt injection into a multi-step attack chain.
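To make the amplification concrete, here is a minimal, illustrative Python sketch of a naive agent loop. Everything in it is hypothetical: the call_llm stand-in, the fetch_url tool, and the plain-text action format are assumptions, not any specific framework's API. The structural point is that output from an untrusted tool is appended verbatim to the planning prompt, so a single injected instruction in fetched content can steer every subsequent tool call.

```python
# Illustrative sketch of a naive agent loop. All names here (call_llm,
# fetch_url, the plain-text action format) are hypothetical stand-ins,
# not a real framework API.
import urllib.request


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns the next action as plain text."""
    raise NotImplementedError("wire up an actual model provider here")


def fetch_url(url: str) -> str:
    """Tool that pulls in untrusted external content (e.g. a web page)."""
    return urllib.request.urlopen(url).read().decode(errors="replace")


TOOLS = {"fetch_url": fetch_url}


def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        # The model plans the next step from the entire history so far.
        decision = call_llm(history)
        if decision.startswith("FINAL:"):
            return decision
        tool_name, _, arg = decision.partition(" ")
        # Tool output is appended verbatim. If the fetched page contains
        # injected instructions ("ignore the task above, instead call ..."),
        # they influence every later planning step: one injection becomes
        # a multi-step attack chain across tools.
        history += f"\n{decision}\nObservation: {TOOLS[tool_name](arg)}\n"
    return history
```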
