Applied AI — Articles
AI Strategy & Implementation Insights
Risk Management · November 2025 · 6 min read

What to Do When AI Gets It Wrong — Managing Errors, Hallucinations & Blind Spots

Every business using AI will eventually get burned by a mistake: the AI confidently states a wrong fact, cites a policy that doesn't exist, generates a number that's off by a factor of ten, or produces advice that sounds right but isn't. This isn't a reason to avoid AI; it's a reason to use it intelligently. Here's how to build the reviews and safeguards that catch mistakes before they cause damage.

John Martines
Applied AI — NEPA & Lehigh Valley

Why AI Makes Mistakes

AI models don't "know" things the way humans know things. They predict likely text based on patterns in training data. Most of the time this produces accurate, useful output. Sometimes it produces confident nonsense.

The technical term is "hallucination" — when an AI generates plausible-sounding but factually incorrect information. It's not lying; it genuinely can't distinguish between "information I know is true" and "text that fits this context."

Common Hallucination Patterns

  • Specific statistics and data points — especially older or niche ones
  • Citations and sources — AI invents plausible-sounding references
  • Names and titles — especially for less prominent people
  • Legal and regulatory specifics — outdated or jurisdiction-specific rules
  • "Edge case" knowledge — information that wasn't well-represented in training data

Importantly: hallucination rates vary significantly by model. Newer models hallucinate less, but none are perfect.


The Tasks Where Mistakes Matter Most

  • Legal documents (HIGH risk): Errors have legal consequences: wrong clause, wrong jurisdiction, missed requirement. Mitigation: always have an attorney review AI-drafted legal work.
  • Financial figures (HIGH risk): Calculation errors, wrong data, outdated numbers. Mitigation: verify all numbers against source data independently.
  • Medical/health info (HIGH risk): Wrong information can cause harm. Mitigation: never use AI for patient-specific medical guidance without clinical review.
  • Customer communications (MEDIUM risk): Wrong info reaches customers. Mitigation: review before sending, especially pricing and policy claims.
  • HR policies (MEDIUM risk): State-specific employment law; outdated regulations. Mitigation: HR attorney review for formal policies.
  • Marketing copy (LOW risk): Errors are visible and embarrassing but usually correctable. Mitigation: proofread before publishing.
  • Internal summaries (LOW risk): Stakes are low; errors are caught in normal workflow. Mitigation: spot-check; don't treat the AI summary as authoritative.

Building a Review Workflow

1. CLASSIFY YOUR AI TASKS BY RISK

Not everything needs the same review. Internal drafts need a quick read. Legal documents need expert review. Build a simple matrix: what goes straight to use, what gets a quick human check, what requires expert review before use.

2. VERIFY SPECIFIC CLAIMS

Numbers, citations, statistics, names, dates, regulatory references — verify these independently, every time. Don't accept AI's specific factual claims on faith. It takes 30 seconds to verify a statistic; a lawsuit over a wrong legal claim takes much longer.
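One practical way to make this step routine is a crude automated pre-pass that flags sentences containing specific figures for a human to verify. A minimal sketch (the function name, regex, and sentence-splitting heuristic are illustrative, and no heuristic replaces actually reading the document):

```python
# Flag sentences containing numbers, percentages, or years so a
# reviewer knows which claims to verify independently. A crude
# heuristic sketch, not a substitute for careful review.
import re

# Matches figures like "40", "1,200", "3.5%", or a four-digit year.
CLAIM_PATTERN = re.compile(r"\d[\d,.]*\s*%?|\b(19|20)\d{2}\b")

def sentences_to_verify(text: str) -> list[str]:
    """Return the sentences that contain a specific figure."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if CLAIM_PATTERN.search(s)]

print(sentences_to_verify("Revenue grew 40% in 2023. We are happy."))
```

A pass like this catches numbers and dates but not invented citations or names, so it narrows the verification list rather than replacing it.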

3. READ FOR WHAT'S MISSING, NOT JUST WHAT'S THERE

AI tends to fill in blanks with plausible content. The risk isn't always wrong information — it's missing information. "Did this AI capture everything important from my notes?" is a different check than "Is this accurate?"

4. USE AI TO CHECK AI

For important documents, a useful technique is to ask AI: "Review this document and identify any factual claims that should be independently verified, any missing information, or any areas where you're uncertain." AI is surprisingly good at flagging its own potential error areas when asked directly.
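If you use this technique regularly, it helps to keep the self-check prompt in one place rather than retyping it. A minimal sketch (the function name is illustrative; the wording mirrors the prompt suggested above, and you would pass the result to whatever AI tool you already use):

```python
# Build a reusable "check your own work" prompt for an AI reviewer.
# The prompt text follows the article's suggestion; the function
# name and structure are illustrative, not a prescribed API.

def build_self_check_prompt(document: str) -> str:
    """Wrap a document in a prompt asking the model to flag claims
    to verify, missing information, and areas of uncertainty."""
    return (
        "Review this document and identify:\n"
        "1. Any factual claims that should be independently verified\n"
        "2. Any missing information\n"
        "3. Any areas where you are uncertain\n\n"
        f"Document:\n{document}"
    )

print(build_self_check_prompt("Our refund policy allows returns within 90 days."))
```

Note that this is a second opinion, not verification: the flagged claims still need a human to check them against source material.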

5. KEEP A FEEDBACK LOG

When AI makes a notable mistake in your workflow, document it: what the task was, what went wrong, and what the correct answer was. Over time, this helps you identify which task types are reliable and which need more scrutiny in your specific environment.
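The log doesn't need to be sophisticated; a shared spreadsheet works, and so does a small script. A minimal sketch of a CSV-backed log (the file name and field names are illustrative, not a prescribed schema):

```python
# Append notable AI mistakes to a CSV log so patterns become
# visible over time. File name and fields are illustrative.
import csv
import datetime
import pathlib

LOG = pathlib.Path("ai_error_log.csv")
FIELDS = ["date", "task", "what_went_wrong", "correct_answer"]

def log_error(task: str, what_went_wrong: str, correct_answer: str) -> None:
    """Record one mistake, writing a header row on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "task": task,
            "what_went_wrong": what_went_wrong,
            "correct_answer": correct_answer,
        })
```

Reviewing the log monthly is usually enough to see which task types are reliable in your environment and which need a tighter review step.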


What Good AI Governance Looks Like at an SMB

Good governance doesn't mean bureaucracy — it means a clear shared understanding of how AI is used.

Three-Level Framework

Level 1 (Use freely): Internal drafts, brainstorming, formatting, content variation, summarizing your own notes

Level 2 (Review before use): Customer communications, proposals, anything with specific facts or numbers, anything that represents your business publicly

Level 3 (Expert review required): Legal documents, financial statements, HR policies, medical guidance, regulatory compliance
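The three levels above can be written down as a small lookup so everyone routes tasks the same way. A minimal sketch (task names and the default-to-strictest rule are illustrative choices, not part of the framework itself):

```python
# Encode the three-level review framework as a simple lookup.
# Task categories here are examples; add your own. Unknown task
# types deliberately default to the strictest level.

REVIEW_LEVELS = {
    1: "use freely",
    2: "review before use",
    3: "expert review required",
}

TASK_LEVELS = {
    "internal_draft": 1,
    "brainstorming": 1,
    "customer_email": 2,
    "proposal": 2,
    "legal_document": 3,
    "hr_policy": 3,
}

def required_review(task_type: str) -> str:
    """Return the review requirement for a task type."""
    level = TASK_LEVELS.get(task_type, 3)  # unknown tasks get expert review
    return REVIEW_LEVELS[level]

print(required_review("customer_email"))   # review before use
print(required_review("grant_application"))  # unlisted, so: expert review required
```

Defaulting unlisted tasks to Level 3 is the safe choice: it forces a conscious decision before a new task type is treated as low-risk.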


When Something Goes Wrong — Recovery

Accept that it will happen. The question is whether you catch it before or after it causes a problem.

If Caught Internally

Fix it, note what happened, adjust your review process for that task type.

If Caught by a Customer or Third Party

Respond honestly, correct the record, don't blame the AI (you're responsible for what your business sends).

If It Had Legal or Financial Consequences

Consult appropriate professionals immediately; document the full sequence of events.


The Businesses That Use AI Best

...aren't the ones who trust it most. They're the ones who have thought carefully about where human review adds value and built simple, consistent processes to catch errors before they matter. AI with good oversight is dramatically better than no AI. AI without oversight is a liability.


Applied AI Builds Governance Into Every Solution

Applied AI builds review workflows and governance frameworks into every solution we deploy — not as an afterthought, but as part of the design. If you're using AI in your business without a clear review process, let's talk about what that looks like.