AccuroAI
60+ Terms · 2026 Edition

AI Security &
Governance Glossary

The essential reference for CISOs, security leaders, and AI governance practitioners. Plain English. Updated monthly.

A · 14 terms

Adversarial Attack

AI Security

A deliberate attempt to manipulate an AI model's inputs, training data, or operating environment to cause it to produce incorrect, harmful, or unintended outputs. Adversarial attacks exploit vulnerabilities unique to machine learning systems and include techniques such as prompt injection, evasion attacks, and model inversion.

Related: Prompt Injection · Model Robustness · Red Teaming

Agentic AI

AI Architecture

An AI system that can autonomously plan, take sequences of actions, use external tools, and pursue goals across multiple steps without requiring human confirmation at each stage. Agentic AI introduces distinct security risks because a compromised or manipulated agent can take real-world actions — sending emails, executing code, or accessing databases — with consequences that extend far beyond generating text.

Related: AI Agent · Autonomous System · Human-in-the-Loop · Prompt Injection

AI Agent

AI Architecture

A software system that uses an AI model to perceive its environment, make decisions, and execute actions in pursuit of defined goals. AI agents may operate independently or as part of multi-agent systems. From a security perspective, every AI agent requires a managed identity, least-privilege access, behavioral monitoring, and a documented action scope.

Related: Agentic AI · Non-Human Identity · Least Privilege

AI Audit Trail

AI Governance

An immutable, chronological record of all AI system interactions, access events, model decisions, and governance actions. Audit trails are the foundational evidence requirement for regulatory compliance under the EU AI Act, GDPR, and NIST AI RMF. An audit-ready organization can produce complete AI interaction logs within 24 hours of a regulatory inquiry.

Related: Immutable Logging · AI Governance · Compliance Evidence

AI Bill of Materials (AI-BOM)

AI Security

A comprehensive inventory of all components that make up an AI system, including foundation models, training datasets, third-party libraries, APIs, and integration dependencies. Similar in purpose to a software bill of materials (SBOM), an AI-BOM enables organizations to identify supply chain vulnerabilities, track provenance, and respond rapidly when components are compromised.

Related: AI Inventory · AI Supply Chain Risk · Third-Party AI Risk
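The structure of an AI-BOM can be sketched in a few lines of Python. This is an illustrative toy, not a formal AI-BOM schema; the field names (`component_type`, `provenance`, and so on) are hypothetical:

```python
# Illustrative AI-BOM sketch: each component of an AI system is recorded
# with enough metadata to support provenance tracking and incident response.
from dataclasses import dataclass, field

@dataclass
class AIBOMComponent:
    name: str
    component_type: str   # e.g. "foundation-model", "dataset", "library", "api"
    version: str
    provenance: str       # where the component came from
    checksum: str = ""    # integrity hash, if available

@dataclass
class AIBOM:
    system_name: str
    components: list[AIBOMComponent] = field(default_factory=list)

    def find_by_type(self, component_type: str) -> list[AIBOMComponent]:
        """Locate every component of a given type — e.g. all datasets,
        when a training corpus is reported compromised."""
        return [c for c in self.components if c.component_type == component_type]

bom = AIBOM("support-chatbot", [
    AIBOMComponent("llama-3-70b", "foundation-model", "3.0", "vendor:meta"),
    AIBOMComponent("faq-corpus", "dataset", "2026-01", "internal"),
])
print([c.name for c in bom.find_by_type("dataset")])
```

The payoff is the query path: when a component is flagged as compromised, the inventory answers "which of our AI systems include it?" in one lookup rather than a manual audit.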

AI Governance

AI Governance

The policies, processes, controls, and organizational structures that define how an enterprise discovers, deploys, monitors, and manages AI systems responsibly and securely. Effective AI governance encompasses identity and access management for AI, data protection, risk assessment, regulatory compliance, and continuous audit readiness.

Related: AI Risk Management · AI Policy · Compliance · Accountability

AI Inventory

AI Governance

A continuously maintained record of every AI system — including models, tools, agents, integrations, and Shadow AI — operating within an organization. AI inventory is the prerequisite for all AI governance activity; organizations cannot govern what they cannot see. A complete inventory includes risk classification, data flows, ownership, and compliance status for each system.

Related: Shadow AI · AI Registry · AI Risk Classification

AI Policy

AI Governance

A written organizational document that specifies which AI tools are permitted, what data may be processed by AI systems, which roles may use which capabilities, how violations are handled, and who is accountable for AI governance outcomes. An AI policy must be specific and enforceable — not aspirational — and must be accompanied by technical controls that enforce its requirements.

Related: AI Governance · Acceptable Use · Shadow AI

AI Risk Assessment

AI Risk Management

A structured evaluation of the potential harms, security vulnerabilities, compliance obligations, and ethical concerns associated with a specific AI system or use case. A comprehensive AI risk assessment covers data privacy risks, model bias, prompt injection vulnerability, access control gaps, regulatory classification, and supply chain exposure.

Related: AI Risk Register · Risk Scoring · EU AI Act Risk Classification

AI Risk Classification

AI Risk Management

The process of categorizing an AI system by its potential to cause harm, based on its application domain, data processed, autonomy level, and regulatory context. The EU AI Act defines four risk tiers: Unacceptable Risk (prohibited), High Risk (strict requirements), Limited Risk (transparency obligations), and Minimal Risk (no specific requirements).

Related: EU AI Act · High-Risk AI · AI Risk Assessment

AI Risk Register

AI Risk Management

A structured document or system that records all identified AI risks across an organization's AI portfolio, including risk descriptions, likelihood and impact scores, applicable regulatory requirements, controls in place, residual risk, and risk ownership. The AI risk register is the operational foundation of an AI risk management program.

Related: AI Risk Assessment · Risk Tolerance · Compliance Evidence

AI Safety

AI Governance

The discipline concerned with ensuring AI systems behave as intended, do not cause unintended harm, and remain within their designated operational parameters. AI safety encompasses both technical controls (robustness testing, anomaly detection, output validation) and governance controls (human oversight, incident response, fail-safe mechanisms).

Related: AI Robustness · Human-in-the-Loop · AI Alignment

AI Supply Chain Risk

AI Security

The security risks that arise from an organization's dependence on external AI components — including foundation models, pre-trained weights, open-source libraries, third-party APIs, and vendor-provided AI features. AI supply chain attacks can introduce backdoors through compromised training data, malicious model weights, or vulnerable dependencies.

Related: AI Bill of Materials · Third-Party AI Risk · Model Integrity

Accountability

AI Governance

The principle that organizations and individuals who deploy AI systems bear responsibility for their behavior, outputs, and impacts — regardless of whether the underlying model was built by a third-party vendor. Accountability requires named ownership for each AI system, documented governance decisions, and the technical controls to enforce stated policies.

Related: AI Governance · Audit Trail · Transparency
B · 3 terms

Behavioral Monitoring

AI Security

Continuous observation and analysis of AI system actions — including inputs processed, outputs generated, tools invoked, and data accessed — to detect anomalies, policy violations, and potential security incidents. Behavioral monitoring is especially critical for agentic AI systems, where the consequences of undetected anomalous behavior can extend across multiple systems and actions.

Related: Agentic AI · Anomaly Detection · SIEM Integration
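A minimal sketch of the idea, assuming a simple statistical baseline: flag an agent whose hourly action volume deviates sharply from its history. Real behavioral monitoring uses far richer models; this z-score heuristic is purely illustrative:

```python
# Illustrative sketch: flag an AI agent whose current action count deviates
# sharply from its historical baseline using a simple z-score test.
import statistics

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Return True if `current` sits more than `threshold` standard
    deviations above the historical mean of action counts."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (current - mean) / stdev > threshold

baseline = [10, 12, 11, 9, 13, 10, 12]   # actions per hour, past week
print(is_anomalous(baseline, 11))   # normal volume
print(is_anomalous(baseline, 500))  # sudden burst of actions
```

For agentic AI, the same pattern extends beyond volume to which tools an agent invokes and which data it touches — any dimension with a learnable baseline can be monitored this way.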

Bias (AI)

Responsible AI

Systematic and unfair discrimination in AI model outputs, arising from biased training data, flawed model design, or misaligned optimization objectives. AI bias can produce discriminatory outcomes in high-stakes domains including employment, credit, healthcare, and law enforcement.

Related: Fairness · Bias Testing · High-Risk AI · EU AI Act

Bias Testing

Responsible AI

Systematic evaluation of an AI model's outputs across demographic groups to identify disparate impacts, performance gaps, or discriminatory patterns. Bias testing is required for high-risk AI under the EU AI Act and NYC Local Law 144.

Related: Bias (AI) · Fairness · Conformity Assessment
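One common screen — the "four-fifths rule" used in US employment contexts — can be sketched as a ratio of group selection rates. This is a first-pass heuristic, not a complete bias-testing methodology:

```python
# Illustrative bias screen: disparate impact ratio across demographic groups.
# A ratio below 0.8 (the "four-fifths rule") is a common red flag that
# warrants deeper statistical analysis.
def disparate_impact_ratio(selection_rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates.values()
    return min(rates) / max(rates)

# Hypothetical selection rates from an AI-assisted hiring screen:
rates = {"group_a": 0.60, "group_b": 0.45}
ratio = disparate_impact_ratio(rates)
print(round(ratio, 2), "passes four-fifths screen:", ratio >= 0.8)
```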
C · 4 terms

CASB (Cloud Access Security Broker)

AI Security

A security policy enforcement point positioned between cloud service consumers and cloud service providers that provides visibility, compliance enforcement, data security, and threat protection. For AI security, CASB solutions must be extended to cover AI service endpoints.

Related: Shadow AI · DLP · Cloud AI Security

Compliance Evidence

AI Governance

Operational proof that AI governance controls are functioning as intended. Distinguished from policy documentation — which describes intended controls — compliance evidence demonstrates that controls are actually working. Examples include current access control configurations, AI interaction logs, bias test results, and documented human oversight decisions.

Related: AI Audit Trail · Regulatory Compliance · Conformity Assessment

Conditional Access

AI Security

An access control model that evaluates contextual signals — including user identity, device health, data classification, location, and risk level — at each authentication event before granting or denying access to an AI system.

Related: Zero Trust · Identity and Access Management · MFA
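The decision logic can be sketched as a policy function over contextual signals. The signal names and thresholds below are hypothetical, chosen only to illustrate the pattern:

```python
# Illustrative conditional-access sketch: combine contextual signals into
# an allow / step-up / deny decision before granting access to an AI tool.
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_authenticated: bool
    device_compliant: bool
    data_classification: str   # "public", "internal", "restricted"
    risk_score: float          # 0.0 (low) to 1.0 (high)

def evaluate(ctx: AccessContext) -> str:
    """Return 'allow', 'step-up' (require stronger auth), or 'deny'."""
    if not ctx.user_authenticated or ctx.risk_score > 0.8:
        return "deny"
    if ctx.data_classification == "restricted" and not ctx.device_compliant:
        return "deny"
    if ctx.risk_score > 0.5:
        return "step-up"
    return "allow"

print(evaluate(AccessContext(True, True, "internal", 0.2)))     # low risk
print(evaluate(AccessContext(True, False, "restricted", 0.2)))  # bad device
```

The key property is that the decision is re-evaluated at each authentication event, so a change in device health or risk score immediately changes the outcome.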

Conformity Assessment

Regulatory Compliance

The process by which a high-risk AI system demonstrates compliance with the technical and governance requirements of the EU AI Act before being placed on the market or put into service. Required for all high-risk AI before August 2, 2026.

Related: EU AI Act · High-Risk AI · AI Risk Classification
D · 3 terms

Data Loss Prevention (DLP) for AI

AI Security

The extension of data loss prevention controls to cover AI system interactions — including prompts submitted to AI tools, outputs generated by AI, and data retrieved by AI agents. Traditional DLP was not designed for AI interaction patterns. AI-specific DLP requires prompt inspection, output scanning, and policy enforcement at AI service endpoints.

Related: Prompt Inspection · Shadow AI · CASB

Data Poisoning

AI Security

An attack in which adversaries corrupt the training data or fine-tuning data of an AI model to embed backdoors, degrade performance, or introduce systematic biases. Research has demonstrated that as few as 100–500 maliciously crafted samples can corrupt an enterprise RAG pipeline.

Related: Model Integrity · Adversarial Attack · RAG Security

Deepfake

AI Security

AI-generated synthetic media — including video, audio, and images — that convincingly impersonates real individuals. Enterprise deepfake threats include voice-cloned CEO fraud, synthetic identity attacks on authentication systems, and fabricated evidence in legal or regulatory proceedings.

Related: AI-Powered Social Engineering · Identity Fraud · Authentication
E · 2 terms

EU AI Act

Regulatory Compliance

The European Union's comprehensive AI regulation, in force since August 2024, which establishes a risk-based framework for AI systems used in or affecting EU individuals. It prohibits specific AI applications outright, imposes strict technical and governance requirements on high-risk AI, and mandates transparency for limited-risk systems. High-risk AI provisions are enforceable from August 2, 2026, with penalties up to €35 million or 7% of global annual turnover.

Related: High-Risk AI · Conformity Assessment · AI Risk Classification

Explainability

Responsible AI

The degree to which an AI system's decisions, outputs, and reasoning can be understood by human stakeholders — including operators, affected individuals, and regulators. Explainability is a legal requirement for high-risk AI under the EU AI Act.

Related: Interpretability · Transparency · Human-in-the-Loop
F · 2 terms

Fairness

Responsible AI

The property of an AI system that produces outcomes without unjustified discrimination based on protected characteristics such as race, gender, age, or national origin. Fairness in AI is both an ethical requirement and a legal obligation in jurisdictions that prohibit algorithmic discrimination.

Related: Bias (AI) · Bias Testing · EU AI Act · Accountability

Foundation Model

AI Architecture

A large-scale AI model trained on broad data that can be adapted to a wide range of downstream tasks through fine-tuning, prompting, or retrieval augmentation. Examples include GPT-4, Claude, Gemini, and Llama. Foundation models are classified as General-Purpose AI (GPAI) under the EU AI Act.

Related: General-Purpose AI · Large Language Model · AI Supply Chain Risk
G · 2 terms

General-Purpose AI (GPAI)

Regulatory Compliance

An AI model that can perform a wide variety of tasks across different domains without being designed for a specific application. Under the EU AI Act, GPAI models that exceed certain capability thresholds are subject to specific obligations for providers.

Related: Foundation Model · EU AI Act · AI Risk Classification

Governance Artifact

AI Governance

A documented record produced as part of AI governance activities — including risk assessments, bias test results, access control configurations, human oversight logs, and compliance mappings. Governance artifacts constitute the evidence base for regulatory compliance.

Related: Compliance Evidence · AI Audit Trail · AI Risk Register
H · 3 terms

High-Risk AI

Regulatory Compliance

A category of AI systems defined by the EU AI Act that pose significant risk to health, safety, or fundamental rights. High-risk AI applications include systems used in employment and HR decisions, credit scoring, healthcare diagnostics, education assessment, critical infrastructure management, law enforcement, and border control.

Related: EU AI Act · Conformity Assessment · AI Risk Classification

Human-in-the-Loop

AI Governance

An AI system design in which a human must review and approve AI outputs or recommendations before any consequential action is taken. Human-in-the-loop oversight is required for high-risk AI under the EU AI Act and is a fundamental safeguard against AI errors, bias, and manipulation.

Related: Human-on-the-Loop · Human Oversight · AI Safety

Human-on-the-Loop

AI Governance

An AI system design in which humans monitor AI performance and retain the ability to intervene, but the AI system operates autonomously without requiring approval for each action. Human-on-the-loop is appropriate for lower-risk AI applications and agentic AI with well-constrained action scopes.

Related: Human-in-the-Loop · Agentic AI · Behavioral Monitoring
I · 4 terms

Identity and Access Management (IAM) for AI

AI Security

The application of identity and access management principles — authentication, authorization, role-based access control, and lifecycle management — to AI systems and the users who interact with them. AI-specific IAM must govern both human identities and non-human identities.

Related: Non-Human Identity · SSO · Least Privilege · Conditional Access

Immutable Logging

AI Governance

The practice of storing AI system logs in a tamper-proof format that prevents modification or deletion after the fact. Immutable logs are the foundation of AI audit readiness — they provide regulators, auditors, and incident responders with a reliable record of what AI systems did, when, and on whose authority.

Related: AI Audit Trail · Compliance Evidence · SIEM Integration
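The core tamper-evidence mechanism — hash chaining, in which each log entry embeds the hash of its predecessor — can be sketched in a few lines of Python. This is an illustrative toy, not a production logging system:

```python
# Illustrative hash-chain sketch: any retroactive edit to an earlier entry
# invalidates every subsequent hash, making tampering detectable.
import hashlib, json

def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log: list[dict]) -> bool:
    prev = "0" * 64
    for record in log:
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log: list[dict] = []
append_entry(log, {"actor": "agent-7", "action": "read", "resource": "crm"})
append_entry(log, {"actor": "agent-7", "action": "email", "resource": "smtp"})
print(verify(log))                    # intact chain verifies
log[0]["event"]["action"] = "delete"  # attempt to rewrite history
print(verify(log))                    # chain no longer verifies
```

Production systems achieve the same property with append-only storage, WORM object locks, or external timestamping rather than an in-memory list.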

Indirect Prompt Injection

AI Security

A variant of prompt injection in which malicious instructions are embedded in external content — such as documents, web pages, emails, or database records — that an AI system retrieves and processes. Indirect prompt injection is widely regarded as the highest-risk attack vector for agentic AI systems.

Related: Prompt Injection · Agentic AI · RAG Security

Interpretability

Responsible AI

The degree to which the internal workings of an AI model — including its representations, features, and decision pathways — can be understood and analyzed by humans. Distinguished from explainability, which concerns the understandability of outputs; interpretability concerns the model's internal mechanisms.

Related: Explainability · Transparency · Bias (AI)
J · 1 term

Jailbreaking

AI Security

A class of adversarial techniques designed to bypass the safety guardrails and operational constraints of an AI model, causing it to produce outputs it was designed to refuse. Enterprise jailbreaking attempts are increasing in frequency and sophistication.

Related: Adversarial Attack · Prompt Injection · AI Robustness
L · 2 terms

Large Language Model (LLM)

AI Architecture

A type of foundation model trained on massive text datasets using transformer architecture that can generate human-like text, answer questions, summarize documents, write code, and perform a wide range of language tasks. LLMs are the most widely deployed AI systems in enterprises and the primary target of prompt injection, jailbreaking, and data extraction attacks.

Related: Foundation Model · Prompt Injection · OWASP LLM Top 10

Least Privilege (AI)

AI Security

The security principle that AI systems — including agentic AI, AI agents, and AI-enabled services — should be granted only the minimum permissions necessary to perform their designated functions.

Related: Non-Human Identity · IAM for AI · Agentic AI
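In practice this often means an explicit allowlist of tools per agent, with everything not granted denied by default. A minimal sketch, with hypothetical agent and tool names:

```python
# Illustrative least-privilege sketch: each agent gets an explicit grant set;
# any tool outside that set is denied by default.
AGENT_TOOL_GRANTS: dict[str, set[str]] = {
    "report-summarizer": {"read_document", "write_summary"},
}

def invoke_tool(agent: str, tool: str) -> str:
    granted = AGENT_TOOL_GRANTS.get(agent, set())  # unknown agents get nothing
    if tool not in granted:
        raise PermissionError(f"{agent} is not granted {tool}")
    return f"{tool} executed for {agent}"

print(invoke_tool("report-summarizer", "read_document"))
try:
    invoke_tool("report-summarizer", "send_email")  # outside the action scope
except PermissionError as e:
    print("blocked:", e)
```

Deny-by-default matters: when an agent is compromised via prompt injection, the blast radius is capped at whatever the grant set allows.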
M · 5 terms

Model Card

AI Governance

A standardized documentation format for AI models that describes the model's intended use, training data, performance characteristics, known limitations, bias evaluation results, and recommended use conditions. Model cards are an emerging best practice for AI transparency.

Related: AI Transparency · AI Bill of Materials · Governance Artifact

Model Drift

AI Risk Management

The degradation of an AI model's performance, accuracy, or behavioral alignment over time, caused by real-world data shifting away from the distribution the model was trained on. Model drift can cause production AI systems to produce increasingly inaccurate or biased outputs.

Related: Model Monitoring · Behavioral Monitoring · AI Risk Management
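One widely used drift score is the Population Stability Index (PSI), which compares a feature's distribution at training time with its distribution in production. A minimal sketch over pre-binned proportions (the bin values below are made up for illustration):

```python
# Illustrative drift detection: Population Stability Index (PSI).
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned proportions (training vs. production)."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

train_bins = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
live_bins  = [0.05, 0.15, 0.30, 0.50]   # feature distribution in production
score = psi(train_bins, live_bins)
print(round(score, 3), "significant drift:", score > 0.25)
```

Monitoring PSI on key input features catches drift before it shows up as degraded accuracy, which often lags the underlying distribution shift.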

Model Inversion Attack

AI Security

An attack in which an adversary queries an AI model repeatedly to reconstruct sensitive information from its training data — including personal data, proprietary content, or confidential business information.

Related: AI Model Theft · Privacy · Adversarial Attack

Model Integrity

AI Security

The assurance that an AI model has not been tampered with, poisoned, or modified between its verified training state and its deployment in production. Model integrity is maintained through cryptographic signing of model weights, hash verification, and secure deployment pipelines.

Related: Data Poisoning · AI Supply Chain Risk · Model Card
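The hash-verification step can be sketched directly: compute a SHA-256 digest of the weights file and refuse to deploy if it does not match the value recorded at release time. A minimal illustration using a stand-in file:

```python
# Illustrative model-integrity check: verify a weights file against a
# known-good hash (e.g. one recorded in the AI-BOM) before deployment.
import hashlib, os, tempfile

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_hash: str) -> bool:
    """Refuse to deploy a model whose weights do not match the trusted hash."""
    return sha256_of(path) == expected_hash

# Demo with a stand-in "weights" file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake model weights")
    path = f.name
trusted = sha256_of(path)           # recorded at release time
print(verify_model(path, trusted))  # unmodified file verifies
with open(path, "ab") as f:
    f.write(b" tampered")           # supply-chain modification
print(verify_model(path, trusted))  # tampered file fails
os.unlink(path)
```

Cryptographic signing of the hash adds a second guarantee: not only that the file is unchanged, but that the recorded hash itself came from a trusted party.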

Multi-Agent System

AI Architecture

An architecture in which multiple AI agents interact, collaborate, or compete to accomplish tasks. Multi-agent systems introduce security risks beyond those of individual agents: a compromised agent can propagate malicious instructions to other agents, and trust relationships between agents create new attack surfaces.

Related: Agentic AI · AI Agent · Indirect Prompt Injection
N · 2 terms

NIST AI Risk Management Framework (AI RMF)

Regulatory Compliance

A voluntary framework published by the National Institute of Standards and Technology that provides organizations with guidance for identifying, assessing, and managing AI risks across the model lifecycle. The AI RMF is organized around four core functions: Govern, Map, Measure, and Manage.

Related: AI Risk Management · EU AI Act · ISO 42001

Non-Human Identity (NHI)

AI Security

Any digital identity that is not associated with a human user — including AI agents, service accounts, API keys, automated workflows, and machine-to-machine credentials. Non-human identities are the fastest-growing AI security gap in enterprise environments.

Related: IAM for AI · Agentic AI · Least Privilege
O · 1 term

OWASP LLM Top 10

AI Security

The Open Worldwide Application Security Project's list of the ten most critical security risks for applications built on large language models. The OWASP LLM Top 10 is the de facto security standard for LLM application development and includes prompt injection (ranked #1), insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities.

Related: Prompt Injection · LLM · Adversarial Attack
P · 4 terms

Phishing-Resistant MFA

AI Security

Multi-factor authentication methods — primarily FIDO2 hardware security keys and biometric verification — that cannot be bypassed by phishing attacks, credential harvesting, or real-time proxying. Microsoft's 2025 Digital Defense Report found that phishing-resistant MFA blocks over 99.9% of account compromise attacks.

Related: MFA · Zero Trust · Conditional Access · IAM for AI

Privacy

Responsible AI

The right of individuals to control how their personal information is collected, used, and shared — including by AI systems. AI systems create novel privacy risks: models may memorize training data, prompts may contain sensitive personal information, and AI-generated outputs may inadvertently expose personal data.

Related: GDPR · DLP for AI · Data Minimization · Model Inversion Attack

Prompt Injection

AI Security

The #1 security risk for LLM applications (OWASP LLM Top 10), in which malicious input manipulates an AI model into executing unintended instructions, overriding its system prompt, or exfiltrating data. Defenses include input validation, prompt isolation, principle of least privilege for AI actions, and output monitoring.

Related: Indirect Prompt Injection · Jailbreaking · OWASP LLM Top 10

Prompt Inspection

AI Security

Real-time analysis of user inputs to AI systems before they are processed, to detect personally identifiable information (PII), regulated data, intellectual property, prompt injection attempts, and policy violations. Prompt inspection is the primary technical control for AI data loss prevention.

Related: Output Scanning · DLP for AI · Prompt Injection
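A toy version of the pattern-matching layer can be sketched with two regexes. Real DLP engines use far richer detectors (validated checksums, ML classifiers, context rules); the patterns below are illustrative only:

```python
# Illustrative prompt-inspection sketch: scan a prompt for obvious PII
# patterns before it reaches an external AI service.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN format
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of the PII patterns detected in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

print(inspect_prompt("Summarize this quarter's sales figures."))
print(inspect_prompt("Draft a reply to jane.doe@example.com, SSN 123-45-6789"))
```

In a deployed control, a non-empty result would trigger blocking, redaction, or a policy warning rather than just a report.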
R · 5 terms

RAG (Retrieval-Augmented Generation)

AI Architecture

An AI architecture in which a language model is augmented with a retrieval system that fetches relevant documents or data from an external knowledge base at inference time. RAG enables AI systems to provide responses grounded in current, proprietary, or specialized information. From a security perspective, RAG pipelines introduce data poisoning risk, data leakage risk, and indirect prompt injection risk.

Related: Indirect Prompt Injection · Data Poisoning · LLM

Red Teaming (AI)

AI Security

A structured adversarial testing methodology in which security specialists attempt to compromise an AI system's safety, security, or governance controls — including prompt injection, jailbreaking, model extraction, and data leakage — before attackers do.

Related: Adversarial Attack · Prompt Injection · Jailbreaking

Regulatory Compliance (AI)

Regulatory Compliance

The state of operating AI systems in accordance with all applicable laws, regulations, and standards. For enterprises in 2026, AI regulatory compliance requires simultaneously satisfying the EU AI Act, GDPR/CCPA/HIPAA, NIST AI RMF, ISO 42001, and sector-specific requirements.

Related: EU AI Act · NIST AI RMF · ISO 42001 · GDPR

Risk Tolerance

AI Risk Management

The level of AI-related risk that an organization is willing to accept in pursuit of business objectives. Risk tolerance decisions must be made explicitly and documented formally — not determined implicitly by which AI systems happen to be deployed.

Related: AI Risk Register · AI Risk Assessment · Accountability

Robustness

AI Security

The ability of an AI system to maintain correct, safe, and consistent behavior when exposed to unexpected inputs, adversarial attacks, distributional shifts, or operational variations. Robustness is a technical requirement for high-risk AI under the EU AI Act.

Related: Adversarial Attack · Red Teaming · Model Drift
S · 3 terms

Shadow AI

AI Security

The use of AI tools, models, and applications within an organization without the knowledge, approval, or oversight of IT and security teams. Shadow AI is the most prevalent and most costly AI security gap: 65% of AI tools in enterprises lack IT approval, and 48% of employees have uploaded sensitive data to unauthorized AI tools.

Related: AI Inventory · CASB · DLP for AI · AI Policy

SIEM Integration (AI)

AI Security

The connection of AI system logs and behavioral monitoring data to a Security Information and Event Management (SIEM) platform, enabling correlation of AI security events with broader organizational security telemetry.

Related: Immutable Logging · Behavioral Monitoring · Incident Response

Single Sign-On (SSO) for AI

AI Security

The integration of all enterprise AI tool access into a centrally managed identity provider (IdP) through SSO, eliminating independent credential management for individual AI tools. SSO for AI is the foundational access control for AI security programs.

Related: IAM for AI · Phishing-Resistant MFA · Shadow AI
T · 2 terms

Transparency

AI Governance

The property of an AI system and its governance program that makes the system's capabilities, limitations, data practices, and decision logic understandable to affected individuals, regulators, and oversight bodies. The EU AI Act mandates transparency obligations for providers and deployers of AI systems.

Related: Explainability · Accountability · AI Governance

Third-Party AI Risk

AI Security

The security, compliance, and operational risks arising from an organization's use of AI capabilities provided by external vendors — including foundation model APIs, AI-embedded SaaS applications, and vendor-built AI agents.

Related: AI Supply Chain Risk · AI Bill of Materials · Vendor Risk Management
U · 1 term

Unacceptable-Risk AI

Regulatory Compliance

AI applications that the EU AI Act prohibits outright because their potential for harm is considered incompatible with EU values and fundamental rights. Prohibited categories include AI systems that manipulate individuals through subliminal techniques, exploit psychological vulnerabilities, perform real-time biometric identification in public spaces, enable social scoring by public authorities, and predict crime based on profiling.

Related: EU AI Act · AI Risk Classification · High-Risk AI
V · 1 term

Vendor Risk Assessment (AI)

AI Risk Management

A structured evaluation of the security, privacy, and compliance posture of third-party AI vendors before their products or services are deployed in an enterprise environment. AI vendor risk assessments should evaluate data handling practices, model training data provenance, security controls, incident response capabilities, regulatory compliance, and the contractual protections offered.

Related: Third-Party AI Risk · AI Supply Chain Risk
Z · 1 term

Zero Trust (AI)

AI Security

The application of Zero Trust security principles — never trust, always verify; assume breach; enforce least privilege — to AI system access and AI agent behavior. Zero Trust for AI means that no AI tool, agent, or integration receives implicit network-level trust; every access event is authenticated and authorized based on identity, device health, and context.

Related: Conditional Access · Least Privilege · IAM for AI
Ready to secure your AI environment?

AccuroAI helps security teams discover, govern, and protect every AI tool in their organization.

Book a demo · Talk to security