What legacy DLP was built to do
Legacy DLP inspects structured data in motion: a credit card number in an email attachment, an SSN in a file upload. It pattern-matches known formats against known egress paths — a model that worked well when egress meant email and USB drives.
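The format-matching approach can be sketched in a few lines. This is a toy illustration, not any vendor's engine; the pattern names and regexes are simplified stand-ins for a real DLP rule set.

```python
import re

# Hypothetical legacy-DLP rules: match known structured formats.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(payload: str) -> list[str]:
    """Return the names of every pattern that matches the payload."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

# A structured identifier in an attachment body is caught:
print(scan("Employee SSN: 123-45-6789"))  # ['ssn']
```

The model works exactly as long as sensitive data keeps its canonical shape and travels over a monitored channel.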
Why prompts shatter the model
When an employee pastes a contract into ChatGPT there is no file, no attachment, and no structured egress event. The sensitive content is a natural-language string inside an HTTPS POST to an external API. Legacy DLP classifies that as normal web traffic and moves on.
Four specific gaps that cause leaks
1. Prompt payloads are unstructured, so format regexes miss the PII.
2. The egress target is a third-party API, not a corporate mail server.
3. LLM responses can contain sensitive inferences even when the prompt looks clean.
4. Multi-turn context lets sensitive data accumulate across requests that each look innocuous.
What AI-native DLP does differently
A classifier trained on enterprise data patterns evaluates every prompt before it leaves the browser or API client — "the NDA for Project Falcon" triggers a hold even without a regex match. Response inspection and structured redaction logs give compliance teams evidence they can actually use.
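The inline flow can be sketched as below. The `classify` function here is a toy stand-in for a trained model (scoring against a hypothetical project-codename watchlist), and the log fields are illustrative, not any product's schema.

```python
import json
import datetime

# Hypothetical watchlist a real classifier would learn implicitly.
WATCHLIST = {"project falcon", "nda"}

def classify(prompt: str) -> float:
    """Toy stand-in for a trained classifier: score by watchlist hits."""
    hits = sum(term in prompt.lower() for term in WATCHLIST)
    return min(1.0, hits / len(WATCHLIST))

def inspect(prompt: str, threshold: float = 0.5) -> tuple[str, str]:
    """Evaluate a prompt before egress; return a decision and a log entry."""
    score = classify(prompt)
    decision = "hold" if score >= threshold else "allow"
    # Structured log entry a compliance team can actually consume.
    log = json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "score": score,
    })
    return decision, log

decision, log = inspect("Summarize the NDA for Project Falcon")
print(decision)  # hold
```

The key difference from the regex model: the hold fires on semantic signals in free text, and every decision leaves an auditable record.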
Migration path
You layer, not replace. Keep legacy DLP for existing coverage. Add AI-native inspection inline on every AI tool your employees use. Connect both to your SIEM for unified visibility. Budget for this as AI security infrastructure, not a DLP upgrade.
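Unified visibility mostly comes down to normalizing events from both layers into one schema before they reach the SIEM. A minimal sketch; the field names and source labels are hypothetical, not a specific SIEM's format.

```python
import json

def normalize(source: str, raw: dict) -> dict:
    """Map a raw event from either DLP layer onto one shared schema."""
    return {
        "source": source,                        # "legacy_dlp" | "ai_dlp"
        "action": raw.get("action", "unknown"),  # block / hold / allow
        "channel": raw.get("channel", "unknown"),
        "user": raw.get("user", "unknown"),
    }

# One stream, one schema -- a legacy email block and an AI prompt
# hold become directly comparable rows in the SIEM.
events = [
    normalize("legacy_dlp", {"action": "block", "channel": "email", "user": "jdoe"}),
    normalize("ai_dlp", {"action": "hold", "channel": "chatgpt", "user": "jdoe"}),
]
for e in events:
    print(json.dumps(e))
```

With both layers feeding the same stream, an analyst can correlate a user's blocked email attachment with a held AI prompt in a single query.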