By guest author Christopher Davidson.
There are a thousand and one articles being written at any given moment concerning the impending dangers of AI, and not a few of these articles are written by computers. This is another addition — one of those few written entirely by a human being — to that growing corpus.
First, a disclaimer: I am not a lawyer. I do not have a legal education. Like many of our people, my knowledge in this sphere is primarily of an autodidactic nature.
Today’s subject is one of a wide-ranging nature which is, unlike much of the AI fearmongering that flits across our devices, a danger almost certain to materialize over the course of the next decade: not because of the particularly advanced nature of these machines, but because of the magical thinking of the Boomer legal class combined with younger generations’ ever-increasing adoption of AI as a research and writing tool (if not entirely as a research and writing substitute).
Background: The Common Law
The legal system of the Federal Government and the State governments beneath it is established on the legal foundation of the Common Law. A minority system on the global stage, the Common Law comes from the Anglo-Saxon, Teutonic roots of the English (and, by extension, the American) people. It is an essentially tribal, parochial system, premised on the idea that the administration of justice is made legitimate by the precedent of the community in question.
Today, all legal argumentation must be accompanied by appeals to precedent. These appeals take the form of citations to the prior legal decisions of the communities in question. In effect, the whole system operates by means of an ongoing chain of recursive and analogical reasoning: find a prior case like the present case, apply the rule of the old case to the new case, and thus reinforce the old rule by its new application.
The Common Law is always developing. Precedent can be (and often is) applied in ways that substantially alter the practical effect of that precedent. It’s a game of Telephone played by a diverse chain of thousands of the ideologically motivated political appointees that we call “judges.” In prior ages, this development was hindered by the limited flow of information: a baron acting as judge in 11th-century York, for instance, could not be influenced by the decisions of his peers in Cornwall or Shrewsbury for the simple reason that he had no idea what they were doing.
In the modern day, the advent of well-stocked law libraries combined with the endless breadth of the Internet has given every judge and lawyer unbridled access to every case ever tried on any given legal subject. This universalization of a system calibrated for a localist scope has done considerable damage to the integrity of our Common Law system, in that it has stiflingly constrained the ability of judges, lawyers, and jurors to administer justice. The availability of precedential information forces these people to hold to and enforce a narrow, black-and-white reading of the law. The flexibility that was the premise and hallmark of the Common Law has been put under siege by its newfound universal application.
Enter AI.
Automated Intelligence has for a few years now been in the hands of lawyers and applied by law firms and legal research groups for the purpose of locating relevant case law and writing legal arguments.
There have been plenty of embarrassing moments in the legal profession’s encounter with AI. One lawyer infamously included several made-up cases in his brief, attributing them to several real-life judges. The court forced him to write a letter to every judge he had cited, letting each know which words he had put into their mouths, and he was subjected to a malpractice lawsuit. He lost his job, reputation, and shirt all in one go. In another incident, a judge had apparently been handing down criminal sentences based solely on the suggestions of an experimental legal AI.[1]
Other examples abound. These errors, while terrifying on account of the user incompetence that underlies them, are nevertheless merely instances of those common misapplications of AI technology that have been replicated in just about any professional field today.
Such incidents are not the subject of this article. Our question is much more interesting: What happens when AI is used well by legal professionals?
Many lawyers and law firms have been using AI generation to help write their briefs, appeals, motions, and memos. In fact, using AI to find and brief relevant cases is rapidly becoming the industry standard, and the legal research tools that most law firms and courthouses use have universally changed their search algorithms to be reliant on AI technology. Most big-time lawyers are therefore going to court to try cases based on precedent entirely discovered by AI and merely curated by the human on the other side of the screen.[2]
Many judges when trying a case are therefore made to adjudicate between two different views of the Common Law generated primarily by AI. When the judge makes his decision, he will cite the cases placed by the AI into the lawyers’ respective briefs, and choose which set of reasons (which reasons, again, are generated by AI) he finds more persuasive for ruling one way or the other. The document he creates, called a “judicial opinion,” combines all these AI arguments and AI citations, and is published on the Internet, where it becomes a binding piece of precedent, read by AI programs which feed it to lawyers, who cite it in briefs, which briefs are incorporated into later judicial opinions.
Anybody who knows anything about AI can stop reading this article here. The picture should be clear enough.
AI — A Primer
For those who don’t know: go to Grok or ChatGPT or some other app and ask the AI to draw you a picture of “the letter x.” When it gives you the result, say, “Take this picture, and make it even more like the letter x.” Continue to ask it to make the picture look more like “the letter x.” After about four or five rounds of this, the picture on your screen will be entirely unintelligible, wholly unrelated to the letter in question. It will have no resemblance to anything created by a human.
Why does this happen?
An AI is only capable of acting in a useful manner when it is trained on examples of human speech and human interaction. When an AI is trained based on its own output or on the output of other machines, it rapidly disintegrates in the face of an increasingly non-human positive feedback loop. This is why various AI programs have deteriorated in their ability to mimic human speech over the past three or so months — with the massive proliferation of AIs and Indians writing things on the Internet, AI programs are increasingly being trained based on the writing and behavior of non-human subjects.
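The feedback loop described above can be sketched numerically. The following is a toy simulation (a deliberately simplified illustration, not any real training pipeline): a "model" that does nothing but estimate the mean and spread of its training data, then generates the next generation's training data by sampling from itself. Recycled through enough generations, the fitted distribution wanders away from the original human baseline, which is the basic mechanism behind what researchers call "model collapse."

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data, drawn from a distribution with mean 0, stdev 1.
data = [random.gauss(0, 1) for _ in range(500)]

for generation in range(1, 11):
    # "Train": fit the model by estimating mean and stdev from current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # "Generate": sample a smaller synthetic corpus from the fitted model;
    # that corpus becomes the next generation's training data.
    data = [random.gauss(mu, sigma) for _ in range(100)]
    print(f"gen {generation}: mean={mu:+.3f} stdev={sigma:.3f}")
```

Because each generation is fitted from a finite sample of the last, estimation noise compounds: the mean performs a random walk and the spread tends to decay, so the later generations bear less and less resemblance to the generation-0 human data.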
The Problem Illuminated
Apply this to the Common Law, and the problem is clear. Judge-made law is replaced by or melded with computer-generated law, and this new precedent is fed back into the algorithms. The algorithms get a hold of a document these algorithms generated, and thus begin to learn not from humans, but from machines. The Common Law is flexible by design — flexible enough for a prudent human judge to bend law towards justice — but this flexibility is turned by algorithms into brutal and unsparing expressions of inhuman codes, spiraling ever further out of human control or intervention.
There will come a point, should this trend continue unabated, that the law will be such that the majority of binding case law is created by automated intelligence under the guise of lawyers and judges. As this happens, even lawyers who refrain from using these programs will have to cite the output of these programs in their work. The field of law will become a product of mere algorithm.
This is all to say that the law is going mad. It is going mad because humans are increasingly cutting themselves out of the loop in favor of the efficiency and speed that comes with outsourcing thought and research to bespoke machines. The fact is simply that a lawyer who uses AI to accelerate his work is more effective than a lawyer who does not.
This removal of the human element is occurring to a greater or lesser extent in every professional and non-professional field. I once witnessed a co-worker of mine, a man who prides himself on being the editor of a well-known poetry journal, ask a machine to “give me five prompts for good poems” before adding his favorite of these prompts to a list. Some prompts were marked with a check to remind himself that he had written that poem; others had no check, for he had yet to write them. Some had two checks: the poems he had both written and published. The most crucial part of his writing process? He submitted his poems to the AI and asked it to improve them.
While the consequences of AI poetry prompts are limited to the creation of slop poetry, judges and lawyers use these programs to create slop law.
The core element of slop law — as with any other slop — is meaninglessness. Slop law is devoid of any goals of justice, any goals of human flourishing. It is not premised on any good or bad legal theory. It does not care for Christianity or Critical Race Theory. It is not essentially woke or essentially objective or essentially anything.
Slop poetry may hurt one’s aesthetic sense and lead a whole generation to dismiss poetry as being gay[3] — and this is a problem, make no mistake — but slop law puts people in prison. It fines people. It assigns the custody of children. It criminalizes some actions, and decriminalizes others. It does all this without any reference to the objective good or evil of the actions in question. One can’t even point to an evil motivation that drives it all; it’s simply regulation without meaning, rules without life, law without justice, vacuous nonsense backed by the threat of state violence. Perfect chaos: at least anarchy has a logic to it.
What Do We Do?
The point of this article is merely that you, dear reader, should be advised. This is coming, and the few lawyers out there who care to keep an eye on it know that it won’t be stopped. Fully automated, entirely pointless anarchotyranny. A disinterested and cold insanity, relentlessly rolled out with the irrefutable force of algorithmic necessity paired with a perfect and complete lack of will.
With Trump blasting AI regulations to bits as a key piece of patronage to his Silicon Valley flank, there is no obvious way to prevent this future. The zeitgeist shifts, and we need to prepare however we can.
This is yet another reason — another drop in that massive, swelling bucket — to get organized. Know people. Get offline. Go to church, join the OGC, and meet lawyers who know the game and who can protect you and your family should Leviathan or Behemoth come knocking. Our time for meeting people on the Internet is closing. The whole future of socializing on the Internet can be gathered from a single example: the dating apps are running ads begging their users to meet in person.
Day after day, the smelly, silicon fist of the jeet-AI axis is making the Internet simultaneously more unusable and more dangerous. If you are spending hours scrolling on Twitter, if you are running in circles waiting for dopamine hits as you wait for the next Current Thing to stream by your face in all sorts of pretty and distracting colors, you are wasting your time. If there was ever a time to spend one’s life surfing the algorithm, that time is not now.
It’s a New Day in America: a day for building, for organizing — for making the commitments we will spend the rest of our lives fulfilling. Now is the time to circle up them wagons, to take what ground we can, to commit ourselves to Christ, and to reintroduce our hearts to what is real.
Deo Vindice — I’ll see you on the ground,
Christopher Davidson
[1] This judge still tries cases, and, to the author’s knowledge, still uses AI for criminal sentencing.
[2] Much of the time, the “human” is a paralegal or first-year law school grad who wrote they/them’s cover letter with ChatGPT. Look at surveys of who big law firms hire. This is not a joke.
[3] We will only know this problem has been solved when our friends over at Double Dealer start publishing poetry.
I'm a civil litigator. I draft and argue motions, go to trial, take depositions, etc. I have found AI most helpful in shortening the time required to find the cases I need (e.g., 10 minutes instead of an hour). As far as I can tell, it's finding me the same cases I would have otherwise found.
Also, the AI doesn't write well enough for me to use it for writing without it being counterproductive. I know people who use it for document review, but I don't do personal injury or product liability work, so I don't typically have tens of thousands of pages to review.
Also, I believe Westlaw (my primary research tool) operates in a somewhat closed manner (i.e., they aren't training on slop or jeets). All it's really done is reduce reliance on boolean operators.
Other than some outrageous examples, I'm pretty happy with the current status quo in the legal industry. I am, of course, extremely biased, because it's working out quite well for me. In my opinion, the primary issue with the legal industry is the hordes of childless women, with unlimited time on their hands, who are intent on conquering every legal institution.
Heed the author's disclaimer, all ye who seek insight into what is happening with AI and the law. If you want AI-doomer titillation then this article is for you.
There are many logical leaps based on incorrect premises.
1. "The slop recursion doom loop": Legal research AIs that attorneys pay for are not training on jeetslop.
2. "AI-generated briefs with their AI citations will force judges to adopt the AI's view of the law": Judges are not bound to decide cases only on the precedent cited by the parties.
3. "AI will narrow the wiggle room for justice provided by the common law through the AI's ability to find cases that are so on-point that the judges will be bound to follow this previously undiscovered case law": Not all cases have precedential effect. Trial court cases have essentially zero precedential effect on other trial court cases. They may be persuasive but they do not restrict a judge's discretion.
Fun thought experiment, but it's as useful as poetry if you want to learn something.