Case Study · Apr 24, 2026 · 7 min read

How AI Cut Contract Review Time by 71% for a Legal Team

A case study on the AI contract review system we built for a corporate legal team that slashed review time, caught more risk, and freed 2,800 hours a year.


A 14-lawyer in-house legal team at a mid-market SaaS company came to us buried under 380 vendor and customer contracts a month. Average turnaround was 6.4 business days. Sales complained loudly. Procurement complained more quietly, but for longer. Within 11 weeks, we built an AI contract review system that cut turnaround by 71%, surfaced 2.3x more material risks on first pass, and gave the legal team back roughly 2,800 hours a year. Here is exactly how we scoped it, built it, and rolled it out.

When the GC first called us, her team was spending more than 60% of their time on first-pass contract review. Not negotiation. Not strategy. Not the work a senior attorney is actually paid to do. Just reading NDAs, MSAs, DPAs, SOWs, and order forms to flag deviations from the company's standard positions.

The math was ugly. Their average contract ran 14 pages. Average manual review time was 47 minutes. At 380 contracts a month, that works out to roughly 3,600 hours a year on a task that was, at its core, pattern matching against a known playbook.

Three symptoms the business felt:

  • Sales cycle drag. Deals over $250K were stalling 8-12 days on legal review alone.
  • Inconsistent standards. Different attorneys flagged different issues on near-identical contracts. Internal partners couldn't predict what would come back.
  • Senior attorney burnout. The two most experienced lawyers on the team were spending half their week on work their playbook already answered.

The GC didn't need a new headcount. She needed a system that could do the mechanical 70% of the work so her team could own the strategic 30%.

How we scoped the project

Before writing a single prompt or schema, we ran our standard AI project scoping process. Three things had to be true before we would build:

  1. The workflow had to be high volume and repetitive. 380 contracts a month cleared that bar easily.
  2. The company had to have a documented playbook. Without a source of truth, AI has nothing to compare against.
  3. Review outputs had to land in a tool the team already used. If adoption requires logging into a new system, adoption dies.

The team had a 47-page negotiation playbook last updated six months earlier. Good enough. Their contracts lived in Ironclad. Their daily work happened in Slack and Outlook. Those constraints shaped everything we built next.

What we built

The system runs in four stages. Each stage has a clear handoff and a human-in-the-loop checkpoint.

Stage 1: Intake and classification

When a new contract lands in Ironclad, a webhook fires into our automation layer. An AI classifier tags the document by type (NDA, MSA, DPA, SOW, order form, amendment) and by counterparty risk tier based on the company's internal vendor database. Each document type routes to the appropriate reviewer, and an unknown counterparty triggers a Dun & Bradstreet and sanctions screen before any substantive review begins.

This alone eliminated about 11% of attorney time. Attorneys had been doing this triage manually.
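The Stage 1 routing decision boils down to a small set of rules. Here is a minimal sketch of that logic in Python; the function name, the routing labels, and the classifier output fields are all hypothetical (the real system sits behind an LLM classifier and the Ironclad webhook, neither of which is shown):

```python
# Hypothetical sketch of the Stage 1 routing rules described above.
# The LLM classification step and the Ironclad webhook are assumed to
# have already run; only the downstream routing decision is modeled.

KNOWN_TYPES = {"NDA", "MSA", "DPA", "SOW", "order_form", "amendment"}

def route_contract(doc_type: str, counterparty_known: bool) -> str:
    """Decide the next step for a newly ingested contract."""
    if doc_type not in KNOWN_TYPES:
        # Unrecognized document types go to a human for manual triage.
        return "manual_triage"
    if not counterparty_known:
        # New counterparties get a D&B and sanctions screen first.
        return "counterparty_screen"
    # Everything else flows straight into playbook-driven review (Stage 2).
    return "playbook_review"
```

The point is that the triage attorneys had been doing by hand is a three-branch decision once the classifier has tagged the document.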

Stage 2: Playbook-driven clause extraction and diff

The core of the system. An LLM extracts every material clause from the contract and diffs it against the company's playbook positions for that contract type. Not a raw text diff. A position-level comparison.

For each clause the system outputs:

  • The company's target position from the playbook
  • The counterparty's proposed position
  • A plain-English summary of the delta
  • A risk rating: green, yellow, or red
  • The suggested fallback or rejection from the playbook
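The five fields above map naturally onto a typed record. This is a hypothetical schema, not the client's actual data model, but it shows the shape of what Stage 2 hands to Stage 3 (the red-first sort is the order the review memo presents findings in):

```python
# Hypothetical per-clause output record for the playbook diff.
from dataclasses import dataclass
from typing import Literal

RiskRating = Literal["green", "yellow", "red"]

@dataclass
class ClauseFinding:
    """One row of Stage 2 output: the playbook diff for a single clause."""
    clause_name: str          # e.g. "Limitation of Liability"
    playbook_position: str    # the company's target language
    proposed_position: str    # what the counterparty sent over
    delta_summary: str        # plain-English description of the gap
    risk: RiskRating          # green / yellow / red
    suggested_fallback: str   # playbook fallback or rejection language

RISK_ORDER = {"red": 0, "yellow": 1, "green": 2}

def sort_for_memo(findings: list[ClauseFinding]) -> list[ClauseFinding]:
    """Sort findings red-first, the order the Stage 3 memo uses."""
    return sorted(findings, key=lambda f: RISK_ORDER[f.risk])
```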

We used a two-model architecture here. A cheaper, faster model did first-pass extraction. A stronger reasoning model validated the flagged deltas. This cut inference cost by roughly 60% versus running everything through the top-tier model, with no measurable accuracy loss on the client's test set.
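The cascade can be sketched as: the cheap model extracts and rates every clause, and only non-green findings get a second look from the stronger model. The model calls below are stubbed as plain functions, since the actual models and prompts aren't part of this write-up:

```python
# Sketch of the two-model cascade, with LLM calls stubbed out.
# Signatures are assumptions; the real calls are prompted LLM requests.
from typing import Callable

ExtractFn = Callable[[str], list[dict]]   # contract text -> clause findings
ValidateFn = Callable[[dict], dict]       # one finding -> corrected finding

def review_contract(text: str, extract: ExtractFn,
                    validate: ValidateFn) -> list[dict]:
    """Cheap first-pass extraction; expensive validation only on flags."""
    findings = extract(text)              # fast, inexpensive model
    reviewed = []
    for f in findings:
        if f["risk"] in ("yellow", "red"):
            # Only flagged deltas pay for the stronger reasoning model.
            f = validate(f)
        reviewed.append(f)
    return reviewed
```

Because most clauses in a typical contract already match the playbook and come back green, the expensive model only ever sees a minority of clauses, which is where the roughly 60% cost reduction comes from.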

Stage 3: Review memo generation

The system compiles findings into a structured review memo the attorney opens in Ironclad. Not a wall of text. A table of flagged clauses sorted by risk, each with a one-click "accept the counterparty's position," "push back with playbook language," or "escalate to senior" action. Every action writes back to Ironclad and updates the deal record in Salesforce automatically.

Attorneys told us this was the single biggest time saver. Not the AI reading the contract. The AI doing the clerical work around the decision.

Stage 4: Learning loop

Every attorney override becomes training data. When a lawyer accepts a position the system flagged yellow, or pushes back on a position the system marked green, that decision feeds a weekly review. Every Friday the senior attorney on rotation reviews the deltas and decides whether the playbook needs an update or the system needs a tuning pass.

After 10 weeks of live use, the playbook had been updated 23 times based on patterns the system surfaced. The AI was making the playbook better, not just enforcing it.
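The weekly review reduces to a simple question: which clauses keep getting overridden? A minimal sketch, with the threshold as an assumed tuning parameter rather than the client's actual setting:

```python
# Hypothetical weekly rollup for the Stage 4 learning loop.
# An "override" is any attorney decision that disagreed with the
# system's rating (accepting a yellow, pushing back on a green).
from collections import Counter

def playbook_update_candidates(overrides: list[dict],
                               threshold: int = 3) -> list[str]:
    """Return clause names overridden at least `threshold` times this week,
    most-overridden first, for the senior attorney on rotation to review."""
    counts = Counter(o["clause_name"] for o in overrides)
    return [clause for clause, n in counts.most_common() if n >= threshold]
```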

The results after 90 days

We measured against a 90-day baseline collected before launch. The numbers:

  • Average turnaround: 6.4 days → 1.8 days. A 71% reduction.
  • Deals over $250K unblocked 9 days faster on average. Sales leadership called this the single biggest operational win of the quarter.
  • Material risks surfaced per contract: 2.3x increase. The AI catches issues tired humans miss at 4 PM on a Thursday.
  • Attorney hours on first-pass review: down 73%. Redeployed to negotiation strategy, vendor management, and two new work streams the team hadn't had capacity for.
  • Cost savings: approximately $420K annualized, net of the system's operating cost. Most of that was from not hiring two additional attorneys the team had been budgeted for.

The GC's quote at the 90-day review: "This is the first time in my career I've watched my team do more strategic work in less time. It doesn't feel like we replaced anyone. It feels like we finally let the lawyers be lawyers."

What made this work (and what usually kills projects like this)

We've seen legal AI projects fail more often than they succeed. Four things made this one different.

We started with the playbook, not the model. The AI is only as good as the standard it compares against. Teams that skip playbook cleanup before building get inconsistent output and blame the model.

We kept humans in the loop on every decision. The system never auto-accepts or auto-rejects. It drafts. Attorneys decide. That's the only architecture that survives contact with real legal risk.

We integrated where the work already lived. Ironclad, Slack, Salesforce, Outlook. Zero new interfaces. Adoption was 100% in week two because nobody had to learn a new tool.

We measured the right things. Turnaround time, risks caught, attorney hours redeployed. Not "AI accuracy" in a vacuum. The business cared about cycle time and coverage, so those were the metrics we optimized.

A few things to steal, whether you work with us or not:

  • Audit your repetitive review work first. If more than half of a senior person's time is pattern matching against a known standard, AI contract review (or its equivalent in your function) will almost certainly pay back inside a year.
  • Clean up your playbook before you buy any tool. A six-month-old playbook with clear positions is worth more than any vendor demo.
  • Budget for integration, not just the model. In our experience the AI itself is 30% of the build. The other 70% is getting the outputs into the tools your team already uses.
  • Pick a workflow with a clear metric. If you can't say exactly what "good" looks like in numbers, the project will drift.

If you're sitting on a legal, procurement, or finance team where skilled people are spending half their week on first-pass review, the automation economics in 2026 are no longer a close call. The tooling is real. The workflows are mapped. The ROI shows up inside a quarter.

We've now built systems like this for legal, finance, procurement, and operations teams across more than 1,000 projects. If you want to see what it would look like for your team, get started here and we'll map it on a 30-minute call.
