Los Angeles courts are using AI as a backlog triage system, not as an automated judge. That distinction matters because the operational gains are real, but the pilot’s value will be decided by a narrower question: whether tools that speed drafting, document intake, and case routing can reduce pressure on a system handling 1.3 million filings a year without pushing judges toward machine-shaped decisions or weakening public trust.
Where the pressure is actually building
The scale problem is concrete. The Los Angeles Superior Court processes roughly 1.3 million filings annually across 36 courthouses, while the federal Immigration Court backlog has climbed past 3.3 million active cases. Those are different systems, but together they show why courts are looking for throughput tools rather than waiting for staffing changes alone to absorb years of accumulated delay.
Los Angeles is also attacking congestion through routing, not just software. The court is sending about 3,000 civil unlimited cases each year into mediation to keep more matters out of trial. That matters because the bottleneck is not a single step. Backlog builds in intake, document handling, motion review, scheduling, and trial capacity, so any serious response has to separate tasks that can be accelerated from decisions that still require a judge’s independent analysis.
Learned Hand is aimed at motions, not final authority
The pilot program centers on Learned Hand, an AI platform that summarizes motions and prepares tentative rulings for judges to review. In practice, that makes it closer to a drafting and sorting layer than a decision engine. California Rule 10.430 is the key guardrail: AI use in judicial work requires disclosure, human verification, and bias auditing, and judges in the pilot must review and edit any AI-generated draft before anything moves forward.
That legal structure is the clearest answer to the common misreading that AI is replacing judicial decision-making. It is not. The court is trying to reduce time spent on repetitive preparation, while preserving a line between assistance and adjudication. Learned Hand CEO Shlomo Klapper has described the product as support for overburdened courts facing heavier paperwork loads and more self-represented litigants, and the company says its software is already used in ten states, including by the Michigan Supreme Court to review appeal applications.
The same targeted design shows up outside chambers. AI-based document processing in clerks’ offices is reportedly reaching about 98% accuracy, which allows staff to move off repetitive clerical work and onto exceptions, corrections, and more complex case support. That is an operational gain, but it is not the same thing as faster justice unless those time savings carry through to actual case resolution.
Efficiency signals versus independence risks
The strongest case for these tools is narrow and measurable: less manual sorting, faster draft preparation, fewer staff hours spent on paperwork, and more chances to redirect cases into mediation before they consume trial time. The strongest objection is also narrow and measurable: even a “tentative” AI draft can create anchoring effects if a judge sees a suggested framing before conducting a full independent analysis.
Los Angeles County District Attorney Nathan Hochman has warned that AI-generated tentative rulings could bias judges at the front end of review. Judges outside the pilot have raised similar concerns anonymously, focusing on the psychology rather than on formal delegation. That gets to the real governance issue. The main risk is not a court openly handing decisions to software; it is a court gradually normalizing machine-prepared reasoning in a way that shifts human judgment without fully admitting it. Learned Hand says it built safeguards such as “Deep Verify,” which hyperlinks citations back to source materials so judges can check the basis for an output directly, but the existence of a verification tool is not the same as proof that judges will use it consistently under workload pressure.
A useful comparison: backlog triage, hidden records, and the limits of automation
One reason not to confuse AI throughput with system repair is the recent disclosure of nearly 330,000 previously unreported criminal records. That episode shows how data cleanup and faster processing can surface old failures with immediate social and economic consequences, including risks to jobs, licensing, and legal status for affected individuals. In other words, speeding the machinery can expose unresolved defects rather than solve them.
The distinction is easier to see side by side:
| Area | What AI can realistically do | What it cannot do on its own |
|---|---|---|
| Motion review workflow | Summarize filings, draft tentative rulings, speed document review | Replace judicial reasoning or remove the need for human verification |
| Clerks’ office processing | Automate repetitive intake and classification with reported 98% accuracy | Eliminate exceptions, disputed records, or downstream legal consequences |
| Backlog reduction | Relieve bottlenecks when combined with mediation and staff reallocation | Instantly clear decades of accumulated cases across multiple court systems |
| Data correction | Surface missing or inconsistent records faster | Control the social and economic fallout once records are disclosed |
The checkpoint before wider expansion
The pilot carries a budget of about $300,000 and runs into early 2027, with its primary focus on civil motions and only limited exploration in criminal court. That timeline matters. The next serious checkpoint is not whether AI can produce readable drafts; it is whether the court can show sustained backlog reduction over time while preserving a clear audit trail of human review, citation checking, and judicial independence.
If the only visible outcome is faster document movement, the project will look like administrative modernization. If the court can show that mediation routing, clerical automation, and judge-side drafting support together reduce delay without eroding public confidence in impartiality, then the pilot becomes something more durable. If not, the technology will have exposed the same limit seen in many overloaded systems: triage can improve flow, but it cannot substitute for legitimacy.
Short Q&A
Does this mean judges are letting AI decide cases?
No. Under California Rule 10.430, judges must review and verify AI-assisted work, and the Los Angeles pilot is structured around support functions such as summaries and tentative drafts.
Can this clear the backlog quickly?
It can reduce pressure at specific bottlenecks, but it cannot rapidly erase decades of accumulated caseloads on its own.
What is the main warning sign to watch?
Whether tentative AI drafts begin to shape outcomes through anchoring, even when formal decision authority remains with judges.
What would count as real success by 2027?
Sustained reduction in backlog pressure, a visible record of human oversight, and no meaningful erosion of public confidence in judicial impartiality.