How to Evaluate Legal AI Security (2026)

"Is it SOC 2 compliant?" is a starting point, not a finish line. Evaluating legal AI security requires understanding what actually protects your clients' data — and most law firms aren't asking the right questions. The stakes are uniquely high in legal practice: client confidentiality isn't just a best practice, it's an ethical obligation enforced by bar associations.

This checklist covers what matters, what doesn't, and the questions that separate secure tools from marketing claims.

Why Legal AI Security Is Different

General business software handles sensitive data. Legal AI handles privileged data — information protected by the attorney-client privilege and work product doctrine. The distinction matters because:

  • Privilege can be waived by disclosure to third parties. If your AI vendor's employees can access the content of your legal communications, you may have a privilege problem.
  • Ethical obligations under Model Rules 1.1 (competence) and 1.6 (confidentiality) require lawyers to understand how their technology handles client information. "I didn't know" is not a defense.
  • Data training is the biggest risk most firms underestimate. If the AI vendor uses your inputs to train or improve their models, your client's confidential information could influence responses to other users.

The Security Evaluation Checklist

#### Tier 1: Non-Negotiable Requirements

These are baseline requirements. Any tool that doesn't meet all of them should be eliminated from consideration.

SOC 2 Type II certification

SOC 2 Type I attests that controls were suitably designed at a single point in time. Type II attests that those controls operated effectively over an audit period, typically six to twelve months. Insist on Type II, and ask for the audit report; reputable vendors share it under NDA.

Tools with verified SOC 2: Relativity, Everlaw, Clio, Ironclad

Data training opt-out (or no training)

The single most important question: Does the vendor use your inputs to train their AI models? The answer must be a clear, contractual "no," not a qualified permission buried in a 40-page terms-of-service document.

Ask specifically: "Will any content I input — documents, queries, or results — be used to train, fine-tune, or improve your AI models or any third-party models?" Get the answer in writing.

Encryption in transit and at rest

AES-256 at rest, TLS 1.2 or higher in transit. This is table stakes in 2026, but verify it rather than taking the vendor's word for it.
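
You can spot-check the transit half of this claim yourself. The Python sketch below connects to a vendor endpoint and reports the negotiated TLS version, refusing anything older than 1.2; the hostname is a placeholder for whatever tool you're evaluating, and encryption at rest still has to be verified through documentation and the SOC 2 report.

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Report the TLS version and cipher negotiated with a vendor endpoint."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}: {tls.version()}, cipher: {tls.cipher()[0]}")

# Hypothetical hostname; substitute the endpoint of the tool you're evaluating.
check_tls("app.vendor-legal-ai.example")
```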

Access controls

Who at the vendor can access your data? Under what circumstances? The answer should involve role-based access controls, audit logging, and a minimal-access policy. "Our engineers can access data for debugging" is a red flag for legal use.
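
As a point of reference, here is a minimal sketch of what "role-based access with audit logging" means in practice: every access attempt is checked against an explicit role-to-permission map, and every attempt is logged whether it succeeds or not. The roles and permission names are illustrative, not any vendor's actual policy.

```python
from datetime import datetime, timezone

# Illustrative role-to-permission map. Note that no role grants access to
# document content itself, only operational metadata and system logs.
ROLE_PERMISSIONS = {
    "support": {"read_account_metadata"},
    "engineer": {"read_system_logs"},
}

def check_access(role: str, action: str, resource: str) -> bool:
    """Allow or deny an action, writing an audit log entry either way."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} role={role} action={action} resource={resource} "
          f"result={'ALLOW' if allowed else 'DENY'}")
    return allowed

check_access("engineer", "read_document_content", "matter-123")  # logs a DENY
```

The detail to press vendors on is the map itself: which roles, if any, include permission to read client document content.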

#### Tier 2: Important for Most Firms

Data residency

Where is your data physically stored? For firms handling matters involving EU data (GDPR), government contracts, or clients with data localization requirements, this matters. Some tools offer region-specific storage; others don't.

Retention and deletion policies

What happens to your data after you stop using the service? How quickly is it deleted? Can you request immediate deletion? The retention period should be clearly stated and as short as possible.

Sub-processor transparency

Most AI tools rely on third-party infrastructure — cloud providers, AI model APIs, analytics services. Each sub-processor that touches your data extends your risk surface. Ask for the complete sub-processor list.

Business Associate Agreement (BAA) availability

For firms handling healthcare-related matters, HIPAA compliance requires a BAA. Not every legal AI tool offers one. If you handle any health-related data, this isn't optional.

Tools that commonly offer BAAs: Clio, Relativity

#### Tier 3: Differentiators for Security-Conscious Firms

Private deployment options

Some enterprise tools offer single-tenant or on-premises deployment. This eliminates multi-tenant data commingling risks entirely. Available from: Harvey, Luminance, Relativity

Zero-retention AI processing

The AI processes your query and returns results without retaining the input or output. This is the gold standard for privileged communications. Ask whether the underlying AI model provider (OpenAI, Anthropic, etc.) also operates under zero-retention terms.

Compliance certifications beyond SOC 2

ISO 27001, FedRAMP authorization (for government work), HIPAA attestations, and industry-specific certifications provide additional assurance. Not all are necessary for every firm, but they indicate a vendor that treats security as a core function rather than an afterthought.

Penetration testing results

Vendors who conduct regular third-party penetration testing and share results (under NDA) demonstrate confidence in their security posture. Ask when the last pen test was conducted and what the findings were.

Questions Vendors Don't Want You to Ask

These are the questions that separate vendors with genuine security from those with security marketing:

1. "Can I see your SOC 2 Type II report?" If they hesitate, that tells you something.

2. "What happens to my data if you go out of business?" Many legal AI startups are venture-funded with uncertain futures. Your data disposition plan shouldn't depend on the company's solvency.

3. "Does your AI model provider have separate terms of service that govern my data?" Many legal AI tools are built on top of OpenAI, Anthropic, or Google models. The wrapper company's privacy policy is meaningless if the underlying model provider retains training rights.

4. "Have you had a data breach? When was the last security incident?" Honest vendors acknowledge past incidents and explain what they learned. Vendors who claim zero incidents ever are either very new or not being transparent.

5. "Can you contractually guarantee that my data will never be used for model training, including by your AI infrastructure providers?" This is the question that matters most. Get it in the contract, not just the FAQ.

A Framework for Decision-Making

| Risk Level | What to Require | Example Use Cases |
|------------|-----------------|-------------------|
| Low risk | SOC 2, no-training policy, encryption | Internal workflow automation, time tracking |
| Medium risk | All above + data residency, deletion policies, BAA if needed | Document drafting, billing optimization |
| High risk | All above + private deployment, zero retention, pen test reports | Privileged communications, M&A due diligence, litigation strategy |

The level of security scrutiny should match the sensitivity of the data you're processing. Using AI to generate a first draft of a marketing email for your firm? Standard security is fine. Using AI to analyze privileged litigation strategy documents? Maximum security posture, no exceptions.
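
If you're comparing several tools at once, the framework above reduces to a checklist diff per risk tier. A minimal sketch, with shorthand control names standing in for the table rows and an invented vendor profile:

```python
# Required controls per risk tier; names are shorthand for the table above.
REQUIREMENTS = {
    "low": {"soc2_type2", "no_training", "encryption"},
    "medium": {"soc2_type2", "no_training", "encryption",
               "data_residency", "deletion_policy"},  # plus BAA if needed
    "high": {"soc2_type2", "no_training", "encryption",
             "data_residency", "deletion_policy",
             "private_deployment", "zero_retention", "pen_test_report"},
}

def missing_controls(vendor_controls: set[str], risk: str) -> set[str]:
    """Return the controls a vendor still lacks for a given risk tier."""
    return REQUIREMENTS[risk] - vendor_controls

# Hypothetical vendor profile compiled from its security documentation.
vendor = {"soc2_type2", "no_training", "encryption", "deletion_policy"}
print(missing_controls(vendor, "high"))
```

Anything the function returns is a question to put to the vendor before the tool touches that tier of work.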

Every ailegal.team Listing Includes Security Notes

We flag SOC 2 status, data handling practices, and suitability for privileged work on every tool listing in our directory, because evaluating security shouldn't require a separate research project for each tool you're considering.

Browse tools by security posture: All Legal AI Tools →

Built a secure legal AI tool? Submit it to our directory → — listing is free.

Like this kind of analysis?

We send a monthly briefing covering new legal AI tools, compliance updates, and practice-area insights. Same voice as the blog, straight to your inbox.

Free. Monthly. No spam.