Editor’s note: Viktor Wang, CSUSB professor of education, leadership and technology, wrote the following article explaining how he enlisted artificial intelligence in a court case in which he was involved.
When I was sued in a civil dispute, I faced a decision familiar to many Americans: hire a lawyer at significant cost or attempt to navigate the legal system alone.
Civil litigation is expensive. Even an early-stage defense can require tens of thousands of dollars. For many people, that financial pressure alone can shape decisions long before a judge considers the merits.
I chose to represent myself. But I did not choose to proceed uninformed.
Instead, I used artificial intelligence tools — not as a substitute for a lawyer, but as a research and analytical aid. What I discovered was not that AI replaces legal expertise. It does not. What it can do, however, is reduce the informational disadvantage that often defines self-representation.
In plain terms: It can help a person understand what a court will require — law, evidence, and procedure — before spending significant time and money.
A more detailed peer-reviewed, open-access article of this experience was published in the Journal of Leadership, Accountability and Ethics, Vol. 22 No. 2, under the title, “When the Flood Wasn’t Real: How a Tenured Professor Fought Back Against a Manufactured Lawsuit: Legal Ethics, Self-Representation, and the Weaponization of Civil Procedure.”
The structural disadvantage of going it alone
A significant share of civil litigants in the United States appear in court without attorneys, particularly in housing, family and lower-value civil disputes. Yet the legal system is built around professional fluency. Procedural rules, evidentiary standards and motion practice assume training.
The result is predictable. Parties with counsel generally perform better — not necessarily because their claims are stronger, but because they understand how to frame arguments, cite precedent and satisfy technical requirements.
For a self-represented defendant, the most daunting challenge is often not the underlying facts. It is understanding the architecture of the case: What must be proven? What constitutes admissible evidence? What is required to defeat a motion for summary judgment? Where does rhetoric end and legal sufficiency begin?
This is where AI tools proved unexpectedly useful.
AI as an analytical scaffold
Used responsibly, AI systems can accelerate legal research, summarize case law, compare pleading standards to factual allegations, and help test the logical coherence of an argument. They compress time and surface patterns that might otherwise take days to uncover.
In my case, I used AI to review statutes and procedural rules, map the elements of asserted claims against the evidentiary record, and refine written motions. Every citation required independent verification. Every factual claim had to be confirmed against actual documents. AI-generated material was treated as a draft, not as authority.
That distinction is critical. Courts have already sanctioned attorneys who relied uncritically on fabricated AI citations. The technology can hallucinate cases, misstate holdings and oversimplify complex doctrines. Used carelessly, it is risky.
Used carefully, it can function as a structured thinking partner — an analytical scaffold rather than a decision-maker.
AI was also helpful in preparing for hearings. By simulating potential judicial questions and stress-testing responses against the record, I was able to translate written arguments into concise oral explanations. This was not about scripting speeches. It was about discipline and preparation.
For self-represented litigants, preparation often determines whether arguments are coherent or scattered. AI can help structure that preparation, provided the user remains accountable for accuracy.
The limits and risks
None of this suggests that AI replaces trained counsel. Complex litigation, evidentiary disputes and appellate practice require professional expertise. Nor does AI eliminate strategic blind spots. Overconfidence is a genuine hazard. So is misunderstanding nuance in precedent.
Transparency is essential. Litigants who use AI tools must verify citations, comply with court disclosure requirements, and recognize that the ultimate responsibility for accuracy remains human.
There is also a broader equity concern. Access to AI tools, and the digital literacy to use them well, is uneven. If technological fluency becomes a prerequisite for meaningful participation in civil justice, new disparities could emerge even as old ones narrow.
AI is not a magic equalizer. It is an amplifier. It magnifies the discipline — or carelessness — of its user.
And it cannot create admissible evidence. That limitation is a feature, not a bug: it keeps the center of gravity on proof.
A shifting access-to-justice equation
Still, the broader implications are significant. For generations, access to civil justice has depended heavily on access to capital. Lawyers cost money. Time costs money. Procedure costs money.
AI does not remove those costs. But it lowers the price of preliminary understanding. It allows individuals to evaluate the basic legal sufficiency of claims, identify jurisdictional or evidentiary issues, and make more informed decisions before committing substantial resources.
For many people, that early clarity matters: it helps them decide whether to seek counsel, what questions to ask, and what documents to gather.
In my own case, the litigation remains within the formal court process, and its outcome will depend on judicial evaluation of the evidence and law. But regardless of the result, the experience revealed something larger: the informational barrier that once made self-representation nearly insurmountable is less formidable than it once was.
Artificial intelligence does not argue cases. People do. It does not weigh credibility. Judges do. It does not bear ethical responsibility. Lawyers and litigants do.
But when used cautiously and transparently, it can help individuals think more clearly about the legal standards they face and the evidence required to meet them.
Civil justice should turn on law and proof — not solely on who can afford the longest runway of litigation. If AI tools help narrow that gap, even modestly, they may become one of the more consequential access-to-justice developments of this decade.
Technology will not replace lawyers. It may, however, reshape who feels capable of standing in a courtroom at all.
If that shift increases lawful, evidence-based participation — without encouraging abuse or shortcuts — then it strengthens the system rather than undermining it.