
The Dangers of Consumer-Level AI Tools in Litigation

We keep talking about hallucinations. Fake cases. Bad citations. The fear is that lawyers and consumers will rely on AI, get the law wrong, and lose in court as a result. Superficial use of AI is a real problem, and in family law cases it causes more harm than it prevents. But it is a fixable problem.

You can avoid fake law by doing what lawyers are already required to do: read what you cite and Shepardize it using the legal AI tools available to lawyers. At Lewellen Family Law Group, we use specialized, enterprise-level legal AI tools to reduce human error, streamline drafting, analyze voluminous materials, and prepare for negotiations and court proceedings, giving our clients a competitive advantage.

The harder problem, and the one quietly becoming far more important to clients, is that unwary lawyers and clients are likely creating discoverable evidence every time they use consumer AI tools. Most of them do not realize it. Once you understand that, the rest of the analysis follows quickly.

The real distinction is not AI versus no AI. It is consumer AI versus enterprise AI, and whether the information being entered into these systems is actually protected or effectively being disclosed.

Courts are not struggling with this. They are applying existing rules. And those rules are not particularly forgiving.

The issue is not just that AI can be wrong. It is that, in many cases, using consumer AI is functionally no different than sharing information with a third party.

The Obvious Problem Everyone Focuses On

The starting point for most discussions is still Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). In that case, attorneys filed a brief citing judicial decisions that did not exist. The cases were generated by an AI tool, and no one verified them before filing. The decision is not binding in California family law cases, but it shows where California courts are headed.

The court did not treat this as a technology issue. It treated it as a Rule 11 problem.

Attorneys have an affirmative duty to conduct a reasonable inquiry into the law before filing anything. That duty does not change because a tool produces something that looks polished. A fake case is not a weak argument. It is not law at all.

The court sanctioned the attorneys, and more importantly, emphasized that lawyers act as gatekeepers. You can use tools, but you cannot outsource judgment.

California followed quickly. In Noland v. Land of the Free, L.P., 114 Cal. App. 5th 426 (2025), an appellate brief contained 23 quotations, 21 of which were fabricated by generative AI. The Court of Appeal published the opinion specifically to make a point that should not need to be said: if you cite a case, you are expected to have read it.

These cases matter. But they are also the low-hanging fruit. They are obvious failures, and they are preventable.

The profession will adjust to that.

Reasonable Expectation of Privacy and Discoverability

The more consequential issue is not accuracy. It is confidentiality and thus discoverability in a family law case.

Lawyers are used to thinking of drafting as a private process. You test arguments, explore facts, and refine strategy in a space that feels internal. Historically, that intuition has been correct. Likewise, parties in family law cases are using AI to strategize and learn their rights, in part to reduce the need for an attorney. They treat their electronic devices as “private” and expect what they type to remain confidential. How many of you are entering “facts” and uploading documents into ChatGPT, Perplexity, or Claude to determine whether you may have legal rights based on those facts and records?

With consumer AI tools, you type in facts, ask questions, and receive something that looks like attorney work product. It feels like you are thinking out loud in a private workspace.

You are not.

That distinction is no longer theoretical. In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), the court addressed whether materials created through a publicly available AI platform were protected by attorney-client privilege or the work product doctrine.

The answer was no.

The court’s reasoning was straightforward and, frankly, inevitable. Communications with a public AI system are not communications with counsel. They are not inherently confidential. And to the extent information is shared with the platform, it is shared with a third party.

That last point matters most. Once information is disclosed outside a protected relationship, privilege is either lost or never attaches in the first place.

The court went even further and made clear that even if the user inputs information originally learned from counsel, sharing that information with a public AI tool can waive privilege.

That is not a subtle shift. It is a structural one.

Why “Consumer vs. Enterprise” Actually Matters

This is where most consumers misunderstand the issue. The difference between consumer and enterprise AI is not branding or marketing. It is legal risk.

Consumer tools are built for broad public use. They often retain inputs, process them externally, and operate under terms that do not guarantee confidentiality in any legally meaningful sense. From a litigation perspective, that means anything entered into those systems may later exist as discoverable data.

Enterprise systems, by contrast, are designed with contractual confidentiality, restricted data use, and controlled environments. When properly implemented, they can function more like other secure legal technologies that lawyers already rely on.

But even that distinction has limits. Enterprise tools reduce risk. They do not eliminate it. Lawyers still need to understand how the system works, what data is stored, and whether anything is shared beyond the platform.

For consumers, the point is blunter: because an AI tool is not an attorney, the attorney-client privilege and the work product doctrine likely do not apply to protect the data. Consumers should also know that in a family law case, even information covered by the constitutional right of privacy is not necessarily shielded from discovery.

The key point is that courts are not creating new rules for AI. They are applying existing ones. If you disclose information to a third party, you should assume it may be discoverable. 

In other words, the risk is not just discoverability. It is that privilege may never attach at all, or may be waived by the disclosure itself.

AI Chats Are Likely Just Another Category of Evidence

Once you look at it that way, the discovery implications become clear, and they are ominous.

AI interactions are just another form of electronically stored information. They sit alongside emails, text messages, and internal notes. If they are relevant, they are potentially discoverable.

In some cases, they may be more revealing than traditional evidence. AI chats often capture how someone was thinking in real time. They can include summaries of events, admissions, evolving legal theories, or attempts to frame a narrative.

That is exactly the type of material opposing counsel will want.

And unlike a rough draft that never leaves a lawyer’s computer, these interactions may exist on third-party systems with their own retention policies.

Preservation Obligations Now Include AI

This is where practice has not caught up yet.

Once litigation is reasonably anticipated, parties have a duty to preserve relevant electronically stored information. There is no exception for AI chats, prompts, or outputs. If anything, those materials should be presumed discoverable unless there is a clear reason otherwise.

It is no longer enough to tell clients to preserve emails and text messages. They should also be instructed to preserve any AI-related materials, including chat histories, prompts, and outputs generated in connection with the dispute.

Just as importantly, they need to be told not to delete or “clean up” those interactions. The instinct to treat AI chats as disposable is understandable, but it creates real spoliation risk if the data turns out to be relevant.

This is not a new legal obligation. It is an existing one applied to a new category of data.

The Takeaway Consumer Litigants Should Focus On

None of this means you should avoid AI. Used properly, these tools can improve efficiency and help manage costs in ways that are difficult to ignore.

But the way many lawyers and clients are currently using AI, particularly through consumer platforms, is creating a parallel record of communications that may be neither privileged nor protected.

That record may ultimately become evidence.

The law has not changed. Courts are applying the same principles they always have. What has changed is how easily people can create a detailed, time-stamped record of their own thoughts, strategies, and narratives without realizing it.

And in litigation, those records rarely stay private.

At Lewellen Family Law Group, we use enterprise-level legal AI that operates under confidentiality protections and within the scope of privilege. We make that investment because it improves the client experience and the quality of our legal services.