AI Security Stack
1. AI Governance Policy
A written company policy that defines approved AI use,
restricted data, tool approval, monitoring, and
escalation when AI introduces risk.
- Approved AI tools
- Restricted data types
- Tool approval owners
- Monitoring standards
- Risk response expectations
2. Shadow AI Discovery
Identifies unapproved employee use of AI tools before it creates a data leak or compliance issue.
- ChatGPT
- Claude
- Gemini
- Perplexity
- Grammarly AI
3. Data Loss Prevention for AI
Prevents employees from entering sensitive business, customer, financial, legal, and intellectual property data into AI tools.
- Customer data
- Employee data
- Financial records
- Passwords
- Contracts
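A DLP control of this kind can be sketched as a pre-submission filter that scans a prompt before it ever reaches an AI tool. The patterns and labels below are illustrative assumptions, not a production rule set:

```python
import re

# Illustrative sensitive-data patterns (assumptions, not a complete rule set).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_for_sensitive_data(text: str) -> list[str]:
    """Return the labels of any sensitive-data patterns found in the text."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

def allow_submission(text: str) -> bool:
    """Block a prompt from reaching an AI tool if it matches any pattern."""
    return not scan_for_sensitive_data(text)
```

In practice the pattern list would be driven by the restricted data types named in the governance policy, and blocked submissions would be logged rather than silently dropped.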
4. Microsoft 365 / Google Workspace Security
Protects the collaboration systems where most SMB AI risk begins: email, files, identity, sharing, and external access.
- SharePoint / OneDrive / Google Drive
- Teams / Slack
- User accounts
- File sharing
5. Identity and Access Management
Governs who can access AI tools and business data through identity-based security controls.
- Single sign-on
- Multi-factor authentication
- Conditional access
- Role-based permissions
- Privileged access controls
6. AI Tool Approval and Vendor Review
Creates a simple approve-or-reject process before employees use AI vendors in the business.
- Data storage review
- Model training review
- Data retention options
- Business security controls
- Admin visibility
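The approve-or-reject step above can be sketched as a checklist comparison. The control names here are hypothetical assumptions standing in for the review items listed above:

```python
# Hypothetical required controls for vendor approval (illustrative names).
REQUIRED_CONTROLS = {
    "data_encrypted_at_rest",
    "no_training_on_customer_data",
    "retention_controls",
    "admin_audit_logs",
}

def review_vendor(name: str, controls: set[str]) -> tuple[bool, set[str]]:
    """Approve only if every required control is present; report the gaps."""
    missing = REQUIRED_CONTROLS - controls
    return (not missing, missing)
```

A rejected vendor comes back with the specific gaps, which gives the approval owner something concrete to send back to the vendor or the requesting employee.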
7. Secure AI Browser Controls
Monitors or blocks risky behavior in browser-based AI tools and web extensions.
- Sensitive data pasted into AI sites
- Confidential file uploads
- Personal AI accounts used for work
- Risky browser extensions
- Unauthorized AI websites
8. Approved Business AI Platform
Gives employees a secure AI option instead of pushing them toward unvetted consumer tools.
- Microsoft Copilot
- ChatGPT Enterprise / Business
- Google Gemini for Workspace
- Claude Team / Enterprise
- Perplexity Enterprise Pro
9. AI Email and Phishing Security
Protects against AI-enhanced phishing, impersonation, credential theft, account takeover, and business email compromise.
- AI-generated phishing emails
- Executive impersonation
- Fake vendor invoices
- Credential theft
- Malicious attachments
10. Endpoint Security
Secures the devices employees use to access AI tools, business systems, and sensitive company data.
- EDR
- Antivirus
- Device encryption
- Patch management
- USB controls
11. AI Agent Security
Controls AI agents, phone agents, chat agents, and workflow automation so they cannot overreach, leak data, or act without approval.
- Limit agent access
- Limit agent changes
- Require approval for sensitive actions
- Log every agent action
- Restrict integrations
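The controls above (limited actions, required approval, logging of every action) can be sketched as a simple gate in front of agent execution. The action names and the `SENSITIVE_ACTIONS` set are illustrative assumptions:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Illustrative set of actions that must never run without human sign-off.
SENSITIVE_ACTIONS = {"send_email", "delete_file", "update_record"}

def run_agent_action(action: str, approved: bool = False) -> bool:
    """Log every agent action; refuse sensitive ones lacking human approval."""
    if action in SENSITIVE_ACTIONS and not approved:
        log.warning("blocked %s: human approval required", action)
        return False
    log.info("executed %s", action)
    return True
```

The point of the sketch is the default: an unlisted action still gets logged, and a sensitive action fails closed unless someone explicitly approved it.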
12. Logging and Monitoring
Provides visibility into AI use, data movement, file access, AI agent activity, and unusual behavior.
- AI users
- AI tools used
- Data uploaded
- Files accessed
- Agent actions
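The fields above map naturally onto a structured log record. This sketch assumes a JSON-lines format and hypothetical field names; any real deployment would follow the schema of its SIEM or log platform:

```python
import json
from datetime import datetime, timezone

def ai_usage_record(user: str, tool: str, action: str,
                    data_labels: list[str]) -> str:
    """Build one structured JSON log line for an AI usage event."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,           # e.g. prompt, upload, agent_action
        "data_labels": data_labels, # sensitivity labels of data involved
    })
```

One line per event keeps the log searchable by user, tool, or data label, which is exactly the visibility the monitoring layer is meant to provide.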
13. Incident Response for AI
Establishes a practical response plan for AI-related incidents before they become customer, legal, or regulatory issues.
- Customer data pasted into ChatGPT
- AI agent sent wrong information
- Sensitive file uploaded to AI
- AI tool connected without approval
- Personal AI account used for business
14. AI Security Training
Trains employees on safe AI use, prohibited data sharing, AI phishing, prompt safety, reporting, and file handling.
- Approved AI tools
- Data that cannot be shared
- AI phishing recognition
- Safe prompt usage
- Mistake reporting
