The old model is gone
For a decade, CISOs could manage the risk of unsanctioned tools simply by blocking them. That era ended when ChatGPT crossed 100 million users in two months. The business will use AI; the question is whether security has a seat at that table or gets bypassed.
What the new mandate looks like
The CISOs gaining influence in 2026 are the ones who come to the AI strategy meeting with a framework, not a list of objections. They own the sanctioned AI stack, run the risk assessments, define the data-handling policies, and report AI risk metrics alongside cyber risk at the board level.
The three decisions that define your posture
First, which AI tools are sanctioned and how employees access them. Second, how sensitive data is protected across all AI interactions, sanctioned or not. Third, how your organization detects, responds to, and learns from AI-related incidents. Everything else is implementation detail.
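The second decision is the one most amenable to technical enforcement. As a minimal sketch of the idea in Python (the patterns, function names, and logging here are illustrative assumptions, not any specific product's API), a gateway sitting in front of AI endpoints could screen prompts for obvious sensitive-data markers before forwarding them:

```python
import re

# Illustrative patterns for obvious sensitive data; a real deployment
# would use a dedicated DLP engine, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive-data patterns a prompt matches."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def forward_if_clean(prompt: str) -> bool:
    """Block and log a prompt that trips a pattern; forward it otherwise."""
    hits = screen_prompt(prompt)
    if hits:
        # Each block event feeds the interception metric reported
        # to the board (see the metrics section below).
        print(f"blocked: matched {hits}")
        return False
    print("forwarded to the sanctioned AI endpoint")
    return True

if __name__ == "__main__":
    forward_if_clean("Summarize this memo for the leadership offsite.")
    forward_if_clean("My SSN is 123-45-6789, draft my appeal letter.")
```

A regex screen like this only catches the obvious cases. The point of the sketch is architectural: every prompt, bound for a sanctioned tool or not, passes through a chokepoint where policy can be applied and interceptions can be counted.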
Building the AI security function
Start with a dedicated AI security role or working group — even two people can run the program. Establish an AI tool review process that can turn around an evaluation in two weeks, not two months. Build relationships with the AI engineering teams before they ship, not after.
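A two-week turnaround is realistic only if the review is a fixed checklist rather than an open-ended investigation. One way to picture that, as a hypothetical review record (the fields and gating logic are illustrative assumptions, not a standard):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolReview:
    """Illustrative record for a fixed-checklist AI tool evaluation."""
    tool_name: str
    vendor: str
    requested_by: str
    submitted: date
    # Gating questions, each answered with evidence during the review.
    trains_on_customer_data: bool = True  # assume the worst until disproven
    supports_sso: bool = False
    data_residency_acceptable: bool = False
    has_audit_logging: bool = False
    notes: list[str] = field(default_factory=list)

    def verdict(self) -> str:
        """Sanction the tool only if every gating control passes."""
        passed = (not self.trains_on_customer_data
                  and self.supports_sso
                  and self.data_residency_acceptable
                  and self.has_audit_logging)
        return "sanctioned" if passed else "rejected"

review = AIToolReview("ExampleChat", "Example Corp", "marketing",
                      submitted=date(2026, 1, 12),
                      trains_on_customer_data=False, supports_sso=True,
                      data_residency_acceptable=True, has_audit_logging=True)
print(review.verdict())  # sanctioned
```

Defaulting trains_on_customer_data to True encodes the assumption the review must disprove, which keeps the burden of evidence on the vendor rather than the reviewer.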
Measuring and reporting AI risk
The metrics boards respond to: number of AI tools in active use vs. sanctioned, volume of sensitive-data prompts intercepted per month, time to detect and contain AI-related incidents, and compliance posture against applicable frameworks. Quarterly reporting minimum; monthly is better.
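As one way to operationalize that rollup, here is a short sketch that turns a month of raw counts into the four board-level figures (the field names, thresholds, and numbers are illustrative assumptions):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Incident:
    detected_hours: float   # hours from occurrence to detection
    contained_hours: float  # hours from detection to containment

def board_metrics(tools_in_use: int, tools_sanctioned: int,
                  prompts_intercepted: int, incidents: list[Incident],
                  controls_passed: int, controls_total: int) -> dict:
    """Roll a month of raw counts up into the four board-level figures."""
    return {
        "ai_tools_in_use_vs_sanctioned": f"{tools_in_use} / {tools_sanctioned}",
        "sensitive_prompts_intercepted": prompts_intercepted,
        "mean_hours_to_detect": mean(i.detected_hours for i in incidents),
        "mean_hours_to_contain": mean(i.contained_hours for i in incidents),
        "compliance_posture": f"{controls_passed}/{controls_total} controls passing",
    }

# Illustrative numbers only.
print(board_metrics(tools_in_use=23, tools_sanctioned=8,
                    prompts_intercepted=141,
                    incidents=[Incident(4.0, 9.5), Incident(1.5, 3.0)],
                    controls_passed=31, controls_total=40))
```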