Why most assessments fail to drive action
The typical AI risk assessment lists every possible risk, assigns a 1-5 severity score, and produces a 40-page report. Security teams read it, agree it is accurate, and move on. The problem is not the assessment — it is the absence of a prioritized, time-bound action plan attached to it.
Phase 1: AI system inventory
You cannot assess risk for systems you don't know exist. Spend the first week building a complete inventory: sanctioned tools (check your procurement records and IdP app catalog), shadow tools (browser extension telemetry and network logs), and internal AI systems your engineering teams have built. This list is the foundation of everything that follows.
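If you keep the inventory in code rather than a spreadsheet, a minimal sketch might look like the following. The file names and column names (procurement export, IdP app catalog, network-log extract) are assumptions for illustration; substitute whatever your procurement, identity, and network tooling actually exports.

```python
# Minimal inventory-merge sketch. File names and columns are assumptions;
# replace them with the exports your procurement, IdP, and network tools produce.
import csv
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    sources: set = field(default_factory=set)  # which source(s) surfaced this tool

def load_names(path, column):
    """Read one column of tool names from a CSV export."""
    with open(path, newline="") as f:
        return {row[column].strip() for row in csv.DictReader(f) if row[column].strip()}

inventory: dict[str, AISystem] = {}
for source, path, column in [
    ("procurement", "procurement_export.csv", "vendor"),
    ("idp_catalog", "idp_apps.csv", "app_name"),
    ("network_logs", "ai_domains_seen.csv", "tool"),
]:
    for name in load_names(path, column):
        inventory.setdefault(name, AISystem(name)).sources.add(source)

# Tools that appear only in network logs are shadow-AI candidates.
shadow = [s.name for s in inventory.values() if s.sources == {"network_logs"}]
print(f"{len(inventory)} AI systems found, {len(shadow)} shadow candidates")
```

The useful property is the `sources` set: a tool that shows up in network logs but not in procurement or the IdP catalog is, by definition, shadow AI.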
Phase 2: Risk classification
For each system, assess three dimensions: data sensitivity (what types of data does it access), action capability (can it take actions or just generate outputs), and exposure surface (is it internal-only or externally accessible). Systems that score high on all three are your tier-1 risks. Start there.
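The three dimensions translate directly into a small scoring function. The 1-3 scale and the tier thresholds below are assumptions chosen for illustration; the point is that once the inventory exists, classification is mechanical and repeatable.

```python
# Risk classification sketch. The 1-3 scale per dimension is an assumption;
# what matters is that tier 1 means high on all three dimensions at once.
from dataclasses import dataclass

@dataclass
class RiskProfile:
    name: str
    data_sensitivity: int   # 1 = public data only, 3 = regulated or confidential data
    action_capability: int  # 1 = generates outputs only, 3 = can act (send, write, execute)
    exposure_surface: int   # 1 = internal-only, 3 = externally accessible

    @property
    def tier(self) -> int:
        scores = (self.data_sensitivity, self.action_capability, self.exposure_surface)
        if all(s == 3 for s in scores):
            return 1  # high on every dimension: assess these first
        if any(s == 3 for s in scores):
            return 2
        return 3

# Illustrative entries, not real systems.
systems = [
    RiskProfile("support-copilot", data_sensitivity=3, action_capability=3, exposure_surface=3),
    RiskProfile("internal-search", data_sensitivity=2, action_capability=1, exposure_surface=1),
]
tier1 = [s.name for s in systems if s.tier == 1]
print("Tier-1 systems:", tier1)
```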
Phase 3: Control gap analysis
Map your existing controls against each tier-1 system. The gaps that appear most frequently are: no inline data inspection, no audit trail for AI interactions, no access controls preventing over-privileged use, and no incident response playbook that covers AI-specific scenarios. Document each gap with the specific control that would close it.
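One way to keep the gap analysis honest is to express it as data rather than prose: a required-controls list checked against what each tier-1 system actually has. The four controls below come straight from the common gaps listed above; the per-system entries are illustrative placeholders.

```python
# Control-gap sketch: required controls vs. what each tier-1 system has in place.
# The per-system control sets here are illustrative placeholders.
REQUIRED_CONTROLS = [
    "inline_data_inspection",
    "audit_trail",
    "least_privilege_access",
    "ai_incident_response_playbook",
]

controls_in_place = {
    "support-copilot": {"audit_trail"},
    "internal-search": {"audit_trail", "least_privilege_access"},
}

for system, present in controls_in_place.items():
    for gap in (c for c in REQUIRED_CONTROLS if c not in present):
        # Each missing entry is a documented gap paired with the control that closes it.
        print(f"{system}: missing {gap}")
```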
Phase 4: The prioritized roadmap
Order your remediation items by risk reduction per dollar and time-to-implement. Inline DLP for your top three AI tools typically ranks first — high risk reduction, fast to deploy. Agent governance controls typically rank second. Compliance documentation third. Build a 90-day plan and review it monthly.
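The ordering can be made explicit too. The cost, risk-reduction, and duration figures below are placeholders rather than benchmarks; the part worth keeping is the sort key: highest risk reduction per dollar first, with time-to-implement as the tiebreaker.

```python
# Roadmap prioritization sketch. Cost, risk-reduction, and duration figures are
# placeholder estimates you would replace with your own.
remediation_items = [
    {"item": "Inline DLP for top 3 AI tools", "risk_reduction": 40, "cost_usd": 50_000, "weeks": 4},
    {"item": "Agent governance controls",     "risk_reduction": 30, "cost_usd": 40_000, "weeks": 8},
    {"item": "Compliance documentation",      "risk_reduction": 10, "cost_usd": 20_000, "weeks": 6},
]

# Highest risk reduction per dollar first; faster items win ties.
roadmap = sorted(
    remediation_items,
    key=lambda i: (-i["risk_reduction"] / i["cost_usd"], i["weeks"]),
)
for rank, item in enumerate(roadmap, 1):
    print(f'{rank}. {item["item"]} ({item["weeks"]} weeks)')
```

Re-run the ranking at each monthly review: as controls land and estimates improve, the 90-day plan should reorder itself rather than ossify.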