When the Lawyer and the Judge Use the Same AI
Meta description: As AI in law becomes mainstream, the biggest danger may not be hallucinations or bias, but shared legal AI systems that narrow legal reasoning, reduce corrective openness, and push courts and lawyers toward interpretive homogenization.
Artificial intelligence is rapidly becoming part of everyday legal work. Lawyers use it to summarize cases, draft arguments, compare authorities, structure submissions, and refine language. Courts and judicial systems are also building policies for responsible AI use, usually focused on accuracy, confidentiality, bias, and accountability. Illinois warns about authenticity, accuracy, bias, and the integrity of filings and decisions; California requires courts that allow generative AI to prohibit the entry of confidential information into public systems and to verify and correct hallucinated output; New York’s court system frames its interim policy around fairness, accountability, security, training, and restrictions on the use of non-private models. That focus is necessary. But it is no longer sufficient.
Public discussion of AI in law still tends to revolve around visible failures: fake citations, biased outputs, privacy breaches, or the fear that machines will replace legal professionals. Those risks are real, but none of them is the deepest one. The deeper risk is structural. It appears when judges, lawyers, clerks, scholars, and institutions increasingly rely on the same or similar AI infrastructures to prepare, frame, filter, and refine legal reasoning.
At that point, the question is no longer whether a model occasionally gets a case wrong. The question becomes whether shared artificial mediation gradually changes the ecology of legal thought itself.
That is where two concepts become useful: interpretive homogenization and synthetic closure.
Interpretive homogenization describes the growing similarity of legal reasoning under shared model architectures. It does not mean that every brief or judgment becomes identical. It means that different actors begin to draw from the same linguistic patterns, the same relevance signals, the same ranking habits, the same implicit assumptions about what counts as persuasive, mainstream, peripheral, risky, or worth ignoring. Diversity remains on the surface, while deeper variation starts to thin out.
Synthetic closure goes further. It refers to the progressive narrowing of a legal system’s openness to revision as common AI systems increasingly shape legal operations. Law has always been self-referential in some sense: it works with precedent, doctrine, hierarchy, institutional memory, codification, and internal methods of correction. That self-reference is not a defect. Properly understood, it is part of how law preserves continuity while adapting to change. The problem begins when this already layered and recursive system is compressed through shared AI infrastructures that intensify repetition without preserving enough contact with social reality, dissent, anomaly, and human judgment.
This matters because law is not just a database. It is a living order built over time through conflict, interpretation, codification, institutional differentiation, and revision. Legal development depends on layered continuity, but also on friction. It needs disagreement. It needs outlier facts. It needs unusual arguments. It needs the possibility that a marginal claim today becomes tomorrow’s doctrinal correction. If those corrective openings shrink, law may remain efficient while becoming less corrigible.
This is where model collapse research becomes jurisprudentially important. In the 2024 Nature paper that brought the concept into wide discussion, Ilia Shumailov and co-authors define model collapse as a degenerative process in which generated outputs pollute the training data of later generations, leading models to misperceive reality. They distinguish early collapse, where the tails of the distribution begin to disappear, from late collapse, where the model converges toward something that bears little resemblance to the original distribution. They also stress that access to the original data distribution becomes crucial, especially where the tails matter.
That insight translates powerfully into law.
Legal systems do not only depend on the center of the distribution. They depend on the tails. They depend on hard cases, dissenting lines of reasoning, minority experiences, procedural irregularities, and social facts that do not fit cleanly into existing categories. In fact, many of the great corrections in legal history began at the margins, not at the center. If shared legal AI increasingly favors what is already well-formed, well-cited, widely repeated, and statistically legible, then the tails of legal experience may begin to disappear from effective consideration even before anyone notices.
This does not require full automation. It does not require judges handing over decisions to a model. It can happen earlier and more quietly.
A lawyer uses AI to generate a first structure. Another uses the same model to refine authorities. A clerk uses a related system to summarize submissions. A researcher uses similar tools to map the literature. A court administrator adopts internal AI guidelines built around the same dominant products. A legal publisher integrates AI ranking features into search. A law firm trains younger associates on prompt patterns that mirror the same architecture. Nobody is acting irresponsibly. Nobody is necessarily doing anything unethical. Yet the system as a whole begins to bend toward the same interpretive gravity.
That is why the principal danger of shared legal AI is not simply error. Error can often be corrected. A fake citation can be exposed. A hallucinated case can be struck out. A confidentiality breach can be sanctioned. But a narrowing of interpretive range is much harder to detect, because it often looks like professionalism, efficiency, consistency, or even consensus.
And consensus is precisely where the danger can hide.
A legal system must be stable, but not sealed. It must be coherent, but not frozen. It must filter noise, but not eliminate novelty. If AI systems increasingly overweight the most available formulations, the most frequently cited authorities, and the most statistically reinforced argumentative shapes, then the law may drift toward a closed loop in which it keeps hearing an increasingly polished version of itself.
That loop becomes even more dangerous when its operative priors are hard to see.
Every model begins somewhere. It reflects training choices, data distributions, ranking preferences, optimization strategies, interface design, moderation rules, and reinforcement signals. In legal settings, those starting conditions matter enormously. They influence which sources are surfaced, which formulations are preferred, which analogies feel natural, which facts are foregrounded, and which styles are treated as credible. Over time, these priors can become overweighted. And because they are embedded in systems that save time and improve fluency, their effects may be accepted long before they are critically examined.
This is one reason why “human in the loop,” taken as a slogan, is not enough. If every human in the loop is reading, drafting, searching, and deciding through similar AI-mediated frames, then the loop itself may already be narrowing. Human oversight matters, but it must be meaningfully plural, not merely formal.
So what should legal systems do?
First, they should recognize that epistemic diversity is not a luxury in law. It is a safeguard. Courts, firms, universities, and public institutions should avoid total dependence on a single model family, a single vendor logic, or a single workflow culture.
Second, they should preserve contact with original legal and social materials. That means primary sources, uncompressed factual records, live doctrinal disagreement, and human-authored reasoning that has not already been pre-filtered through the same synthetic layer. The lesson from model collapse is simple but profound: if the tails matter, you must protect access to reality beyond the model’s own recursive outputs.
Third, they should audit not only outputs, but convergence. The most important question is not only “Was this answer accurate?” It is also “Are our reasoning patterns becoming too similar?” In the age of shared legal AI, variance is a public good.
Fourth, legal education should train for friction, not just fluency. Future lawyers must know how to use AI, but they must also know how to resist its smoothing effects when necessary. They must be able to detect what is missing, not only what is efficiently produced.
The future of AI in law will not be decided by whether lawyers continue to use these tools. They will. The real question is whether legal institutions can use them without surrendering the pluralism, contestability, and corrective openness that make law more than a machine for repeating its own past.
The hidden danger is not that AI will make law artificial.
It is that shared AI may make law too seamless to correct.
Reference list
Aristotle. Politics.
Bibó, István. Válogatott tanulmányok [Selected Studies].
California Judicial Branch. “Rule 10.430. Generative artificial intelligence use policies.” Effective September 1, 2025.
Illinois Supreme Court. Policy on Artificial Intelligence. Effective January 1, 2025.
New York State Unified Court System. Interim Policy on the Use of Artificial Intelligence. Effective October 2025.
Pokol, Béla. The Theory of Legal Stratification: Interactions between Layers of Legal Systems.
Shumailov, Ilia, et al. “AI models collapse when trained on recursively generated data.” Nature 631 (2024).
Wendt, Alexander. Social Theory of International Politics.
Weizsäcker, Carl Friedrich von. Information and Structure in Systems: A Philosophical Inquiry.

