Security & Privacy · August 2025 · 6 min read

AI Security & Data Privacy: What Every SMB Needs to Know Before Going All-In

Most businesses adopt AI tools without thinking carefully about data privacy. Then something happens — a client asks where their data goes, an employee pastes something sensitive into ChatGPT, or IT raises a red flag — and the lack of a policy becomes obvious. Here's what you actually need to know.

John Martines
Applied AI — NEPA & Lehigh Valley

The Risk Nobody Talks About

The AI productivity gains are real. The data risks are also real.

The #1 mistake: employees treating public AI tools (ChatGPT free, Claude free) like a private assistant — pasting in client contracts, financial data, HR records, source code.

Most public AI tools use your conversations for model training by default unless you opt out or use an enterprise/API tier. A single employee pasting a client NDA into ChatGPT's free tier could be a contractual or compliance violation.

This isn't a reason to ban AI — it's a reason to have a policy.
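
A policy can also be backed by lightweight tooling. Below is a minimal Python sketch of a pre-send screen that flags obvious sensitive patterns before text goes to an external AI tool. The patterns and the `screen_before_send` helper are illustrative assumptions, not a real data-loss-prevention product.

```python
import re

# Illustrative only: a few regexes that catch obvious sensitive patterns.
# A real data-loss-prevention setup would go far beyond this short list.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marking": re.compile(
        r"\b(confidential|attorney[- ]client|NDA)\b", re.IGNORECASE
    ),
}

def screen_before_send(text: str) -> list[str]:
    """Return warnings for text that is about to go to an external AI tool."""
    return [
        f"Possible {label} detected; do not paste this into a free AI tool."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

# Example: this draft trips both the NDA and the SSN checks.
for warning in screen_before_send("Per the NDA, the client's SSN is 123-45-6789."):
    print(warning)
```

In practice a screen like this would live in a browser extension or proxy; the point is that "don't paste sensitive data" can be reinforced by tooling, not just trust.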


What Actually Happens to Your Data

| Platform | Free Tier | Paid Consumer | Enterprise/API |
| --- | --- | --- | --- |
| ChatGPT | Data may be used for training | Plus ($20/mo): can opt out | Enterprise/API: not used for training; SOC 2 |
| Claude | Data may be used for training | Pro ($17–20/mo): can opt out | Team/API: not used for training |
| Gemini | Data used for improvement | AI Pro ($20/mo): enhanced controls | Google Workspace admin controls |
| Microsoft Copilot | M365 data not used for OpenAI training | Business/Enterprise: full data controls | Enterprise: E3/E5 compliance features |
| Meta AI | Consumer app has broad data terms | No paid consumer tier | Llama self-hosted: you control everything |
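
The last row is worth making concrete. "Self-hosted" means the model runs on hardware you control, so prompts never leave your network. Here is a minimal sketch, assuming a local Ollama install with a Llama model already pulled; the model name and prompt are placeholders.

```python
import json
import urllib.request

# Query a self-hosted Llama model via Ollama's local HTTP API.
# Nothing here is sent to an external provider; the request stays
# on your own machine or network.
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = {
    "model": "llama3",  # placeholder: any model you have pulled locally
    "prompt": "Summarize this contract clause in plain English.",
    "stream": False,    # return one JSON object instead of a stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

print(body["response"])  # the completion, generated entirely locally
```

Any local runtime works the same way in principle; the design choice that matters is that the endpoint sits inside your perimeter.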

The 5 Rules Every SMB Should Follow

1. Never Paste Sensitive Data Into Free AI Tools
Client contracts, PII, financial records, health data, source code with credentials. Free tiers have the least data protection. If your team uses AI, they need to know this.

2. Use Enterprise or Paid Tiers for Business Use
ChatGPT Business/Enterprise, Claude Team, Microsoft Copilot, and Google Workspace AI all offer contractual data protection. The $20/month is not a luxury — it's a compliance requirement for business use.

3. Opt Out of Training Data Use
Every major platform lets you disable use of your conversations for model training. Find the setting and turn it off. It takes 2 minutes.

4. Set a Clear AI Usage Policy
Document which AI tools employees can use, what data they can share, and who approves new AI tools. This doesn't need to be 20 pages; a one-page policy is better than no policy (a sample skeleton follows this list).

5. Consider Where Your AI Runs
Cloud-based AI sends your data to external servers. For highly sensitive industries (healthcare, legal, defense), self-hosted models like Meta's Llama (as sketched after the comparison table above) keep data inside your network, and private cloud deployments of models like Claude keep it inside your own cloud tenant.
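
For Rule 4, here is one possible one-page skeleton, assembled from the rules above. The company name, tool list, and contact line are placeholders; treat every heading as a suggestion to adapt, not a legal template.

```
ACME Co. AI Usage Policy (one page)

1. Approved tools: ChatGPT Business, Claude Team, Microsoft Copilot (Business).
   Free-tier AI tools are not approved for work data.
2. Prohibited inputs: client contracts, PII, financial records, health data,
   credentials, and anything marked confidential.
3. Settings: training-data use disabled on every approved account.
4. New tools: require IT/management approval before first use.
5. Questions or incidents: report to [policy owner] within 24 hours.
```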

Industries That Need Extra Caution

Healthcare

HIPAA applies. Patient data in any form cannot go into a public AI tool without a Business Associate Agreement (BAA). Microsoft and some enterprise AI providers offer HIPAA-compliant configurations — but you must verify and document this.

Legal

Attorney-client privilege applies. Client information fed into an AI tool could be discoverable. Law firms need specific policies and often prefer private deployments.

Financial Services

SOC 2 and PCI-DSS considerations apply. Client financial data requires careful handling regardless of the tool.

Government Contractors

FedRAMP, ITAR, and CUI requirements may restrict what AI tools you can use entirely.

Note: Most NEPA/LV SMBs are not in these heavily regulated categories, but any business with client contracts containing confidentiality clauses should treat those documents as restricted.


The Good News: This Is Manageable

You don't need a security team to handle this — you need a policy and the right tier of tools.

The enterprise and paid tiers of major AI platforms are designed for exactly this: business use with appropriate data controls.

Most compliance requirements can be met by using the right subscription tier and opting out of training data use.

Applied AI only builds on API-tier deployments: no data goes to model training, and all conversations are ephemeral.
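
As a reference point, an API-tier call is ordinary application code rather than a chat window. A minimal sketch using Anthropic's Python SDK follows; the model ID and prompt are placeholders, and you should confirm the data-usage terms attached to your own API agreement.

```python
import anthropic

# API-tier usage: requests go through a developer API key and, under the
# provider's commercial API terms, are not used for model training.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use a current model ID
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": "Draft a one-paragraph AI usage policy reminder for staff.",
        }
    ],
)

print(message.content[0].text)
```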


Your AI Privacy Checklist

Security & Privacy Quick Check
☐ All employees briefed: no sensitive data in free AI tools
☐ AI tools upgraded to business/enterprise tiers
☐ Training data opt-out enabled on all platforms
☐ Written AI usage policy documented and distributed
☐ High-sensitivity industries: BAA or private deployment verified
☐ New AI tools require IT/management approval before use
Applied AI + Data Privacy

Every Applied AI solution we build uses API-tier deployments with zero training data sharing. We can also help you build the usage policy and the training your team needs to use AI safely. Reach out to get started.