Episode #120: AI’s Role in Family Law

For Better, Worse, Or Divorce Podcast

In this episode, Jake Gilbreath and Brian Walters discuss the growing role of artificial intelligence in family law. They break down how AI is being used in legal research, document drafting, mediation and financial discovery – and where it may (or may not) replace traditional legal practices. Jake and Brian wrap up the episode by touching on the ethical responsibilities outlined in a recent Texas Ethics Opinion, including the importance of technological competence, protecting client confidentiality and ensuring accuracy.

Available on: Apple Podcasts and Spotify

Your hosts have earned a reputation as fierce and effective advocates inside and outside of the courtroom. Our partners are experienced trial attorneys who’ve been board-certified in family law by the Texas Board of Legal Specialization.

Jake Gilbreath: Well, thanks for tuning into the For Better, Worse, or Divorce podcast. This is the podcast where we provide you with tips and insights on how to navigate divorce and child custody situations, particularly in the state of Texas. I’m Jake Gilbreath. I’m joined by my partner Brian Walters. Today we are going to talk about AI, which is of course the hot topic these days, and specifically how AI has evolved in its interplay with the law, particularly family law, our own personal experience with it, and what we’re seeing from our clients. And then we will turn from that discussion to talk about a recent ethics opinion that the State Bar of Texas issued concerning the use of AI in law.

Brian and I are the first ones to tell you that we’re by no means experts in this. We are always curious. We try very hard to utilize technology in our law firm. We’ve been like that since we first started partnering up, looking for different ways to leverage technology, make things more efficient for our clients, for our staff. We think there’s a better work product that way, and it saves the clients time, effort, and money if you do it that way. That’s our general philosophy, and now that AI is becoming more of an issue and we see more of its use in our day-to-day lives, we thought we’d just address it on the podcast.

Brian, I guess sort of starting, just speaking colloquially, how are you seeing AI turn up in your cases? Let’s just talk about it from the perspective of potential clients or your clients that you’re representing, and how you see them using it or asking you about it.

Brian Walters: Yeah, it does come up. I’ll tell you, I was a little skeptical of the whole thing at the beginning when it came out. I thought it might just be another one of those tech manias that comes and goes, but it’s become, I think, in several ways, more common in what I deal with. First of all, clients have always been intelligent and done research on their own. In the pre-internet days, that probably meant talking to their cousin who got divorced; in the internet days, it meant Googling things. Now they’ll put the same questions through AI. They’ll ask, “Hey, if I’m a dad who does blah, blah, blah in Texas, what’s likely to happen in my custody case,” or something like that. So you get more sophisticated and informed clients.

I think, generally, it’s better information than just Googling it. We don’t seem to get the common misconceptions as often as we used to when people would just Google things, the “I want 50-50 or alimony” kind of stuff that doesn’t apply in Texas and the real world. I’ve had some clients who’ve even taken it a couple of steps further. I had one the other day, a potential client, who was putting the messages between her and her co-parent, her ex-husband, into AI and asking it to analyze his personality and her personality and then to suggest good ways to respond. It was really interesting. She showed me the outputs from it. It was fascinating. I don’t think AI can practice medicine, but it certainly had some opinions about each of their psychological and psychiatric conditions, and most importantly, it suggested very, very good ways for her to respond to his particular personality. That’s the kind of thing I’ve started to see, even just as of a week or two ago.

Jake Gilbreath: Yeah, I’m the same way. I think you’re right. It comes in various levels of usage. The most basic is the “Hey, I ran this through Grok or ChatGPT,” or whatever. “Here’s what it says.” I have a lot of potential clients doing that. I have a lot of current clients doing that. I’ve been impressed with the way our clients and potential clients have used it and with their understanding of the limitations of relying on an AI answer. Like you said, it reminds me of 10 or 15 years ago, when people would come in and say, “Well, I Googled it.” Most people, there’s always the exception, would say, “I’ve Googled it. I understand this is just what Google said. What do you think?” Right? They used it, but they had a great deal of skepticism about what they read. It was a way to, I don’t know the best way to put it, grease the wheels for the conversation.

Somebody’s coming in and they’re starting to understand the terminology. They’re understanding what other people are saying or what ChatGPT has said or what have you, and they’re just in a better position to have a more productive conversation with their lawyer, which I think is great. For consults, for example, it makes for a much more efficient consultation if you’ve gone through either Google or one of the AI agents and asked, “What’s conservatorship? What’s possession and access? What’s child support? How’s that set?” You understand there may be inaccuracies in what you’ve gotten from AI, but you understand how to have the conversation. So you’re almost kicking the tires on what you’ve learned online. I see people, frankly, use it too, no different than I would in my personal life or the way I do use it in my personal life, as a tool to see if the professional they’re talking to really lines up, or, I guess, has the competency they’re looking for.

Again, no different than Google. If I Google what a plumber should do to fix the leak in my faucet, and some plumber comes over and says something completely different from what I’ve read online, maybe this plumber’s right, or maybe they don’t know what they’re doing and everybody online is right. I see people using AI the same way. If you ask AI something and the lawyer you’re talking to is just totally off in a different universe from AI, one of them’s wrong, and it may be the lawyer, or it may not be. I think as lawyers, the way to approach that is, one, we don’t need to get our feelings hurt. I see a lot of lawyers get offended when people Google things or ask AI. Two, we need to be prepared to say, “Yes, but here’s the nuance.” Or, “Yes, but here’s where it’s wrong. Here’s where it’s right,” and use it to build off of with our advice, because it often is a really good starting point.

This is random, it just came off the top of my head, but I’m going to use it as an analogy. My brother-in-law, my sister’s husband, is a phenomenal artist. He does it professionally; that’s what he does. He was talking about various artists who, if they’re doing work for a business, will sometimes start by asking AI for a concept, and then they take it and make it their own. It kind of helps inspire their work, just running something through AI. I think a lot of it is the same for us. It’s a starting point. I’m not going to take that product and just put a rubber stamp on it, but I’m going to take that product and talk about ways that it’s helpful and ways that it could be wrong. We’ll talk about limitations in a second. But there are limitations to AI.

And then, like you said, Brian, that’s kind of the basic use I see. I also see clients, and we in the office as well, use it as a tool for communication. It can be helpful with communication. It can also be helpful for discovery. I had a case, this was a year ago, and I was way more skeptical back then about AI, where the other side had produced some text messages between the other side and another witness, and it had to be like 15,000 pages of text messages between the two individuals. And I remember thinking, who on earth is going to read this? Certainly not the lawyers. That would cost $100,000 just to read. The client ran it through AI, I forget which program he used, and asked it various questions, and he found some really good text messages that we would never have found just because of the sheer volume; I would never have had time to dig out the two or three really, really relevant text messages that he was able to find.

It’s like that with bank statements. It’s like that with, really, all financial records. And again, I think as lawyers, and I want to get your thoughts on this, Brian, we need to embrace that stuff, just like the internet or all things technology. We see a lot of people in our profession not just shy away from it, but kind of reject it and try to minimize or eliminate the use of technology in what we do. What are your thoughts on that?

Brian Walters: Yeah, I agree with you. That’s a great example. Financial statements are another: “Hey, find me all the cash withdrawals over $500,” that type of thing. There are often hundreds or thousands of pages of documents, or thousands of messages, and it is prohibitive for us to go through them line by line. As attorneys, you’re hesitant to delegate it to a non-attorney staff member, and your client often doesn’t understand it or know what to look for, or might be looking for something different. So that’s a great way to go through it. I think that’s an excellent idea.

Jake Gilbreath: Well, speaking of what you’re talking about, Brian, with asking about custody and what’s going to happen in court and stuff like that, there are limitations, I think, on this, and it kind of brings us to the ethics opinion in just a second. I think we do have to counsel clients that, and this is a limitation of AI, taking the most common example of ChatGPT, ChatGPT will give you an answer no matter what. If you ask it a question, it will spit out an answer, and it’ll spit out a very confident answer. And it’s not always right. It’s hard to tell that it’s not right because it does speak so confidently. The terminology, though, again, you and I aren’t experts on this, is that it will hallucinate. It will hallucinate and come up with incorrect information. It’s constantly getting better, but it’s not always right.

So my current thinking is that AI is better used in conjunction with a lawyer. Maybe the use of AI helps you prepare for your meeting with your lawyer, helps guide the conversation, gets you ready for it, makes it more efficient, but again, you have to understand the limits of the information that you get. My wife knows this because I’ve spent way too much time doing it, but you can get incorrect information from ChatGPT or Grok or any of them. If you ask it enough different complex legal questions, it will eventually get a wrong answer. And if you’re not doing this for a living, that wrong answer will just seem like every other answer you’re getting.

So the way I’m currently describing it to clients, and I don’t know the percentage, so I’m just making this up, is that AI is accurate maybe 80% of the time, and 20% of the time it’s wrong to devastating effect. Whether that’s 20%, or call it 10%, or call it 5%, it doesn’t matter; when it’s wrong, it’s wrong in a big way, and it can really affect things. That goes back to why it’s so important to use it as a tool and work with your lawyer. And why it’s so important as lawyers to embrace the part of it that is correct, the part that’s helping guide our clients and making what we do more efficient and a better product for our clients, while understanding that there are limitations and that it’s our role as the professionals, just like the doctor, to come in and address the imperfections in AI so you don’t have those devastating effects. We should probably address that in other podcasts.

There’s a news story I was reading the other day on Above the Law about an AI company, this is in federal court, that has committed to representing itself in its lawsuit 100% by AI. All of its pleadings are written entirely by AI, no lawyers. The news story had posted the pleading that the AI had written, and everybody was having a good laugh at the errors in it. But I read it, and it’s like, “Well, 90% of this is pretty good. Pretty impressive. Good points and good arguments.” And then there’s that 10%: “You just admitted liability or caused the whole lawsuit to go down the drain with this one paragraph right here.” But it’s a good start. So that’s where we’re sitting right now, and I think we’re recording this in April 2025. Who knows where we’ll be in a year?

Brian Walters: In Texas family law, I think there are two issues that I see become problems. One is that family law is state-specific. Let’s say I was getting divorced and maybe had an immigration issue related to a work visa. If I just ask the AI, “Hey, I’m going to get divorced with this kind of work visa, and by the way, what’s my alimony situation going to be?”, immigration law is federal, so maybe it’ll get that part right, but the alimony answer would depend on what state you’re in. So I think that’s important, and I do see people sometimes bring in answers that don’t distinguish between Texas and other states.

And then the other part of it, and I think this is where lawyers are really important and where AI would really struggle, is the difference between the law as it’s written and the practice of it in a courtroom. A classic example is child support, where there are 22 or 23 factors the court can use to adjust child support off the guidelines. But as a practical matter, that almost never happens except, I think, if you have a very truly disabled child and one parent has a lot of money or a lot of income. So you might get an answer that says, “Well, child support, there’s a guideline, but it can be changed for all these reasons.” And then people think, “Oh, well, I’ll get it changed.” And then they come to meet us and we have to say, “Well, not really.” Those are the kinds of things. But I think together they can work: the client can come in with a basic understanding of the concept, we can say, “Well, here’s what really happens,” and then they can get to the right answer.

Jake Gilbreath: Hopefully in a much more efficient way. So I guess the summary is: it’s great. We encourage it. We are continuing, just like we approach all things with technology in our firm, to explore it and utilize it in a way that benefits our clients. There are ethics surrounding it. The State Bar of Texas Ethics Committee actually issued an opinion back in February 2025. Let’s see. It’s Opinion Number 705, entitled “What Ethical Issues Are Raised Under Texas Disciplinary Rules of Professional Conduct by a Lawyer’s Use of Generative Artificial Intelligence in the Practice of Law.” That’s a long title, classically lawyerly, rather than just saying, “What are the ethical issues raised by the use of AI?” But okay, long title. It’s an interesting opinion, and you can find it online. It’s, I think, six pages single-spaced. As I read it, it makes maybe four or five points, perhaps in a longer manner than it needed to.

But essentially, it says, obviously, that Texas lawyers need to educate themselves about AI and the ethical issues that may arise. Ethical issues around sharing and protecting client information are obviously very important for us as lawyers. The opinion says that lawyers should acquire basic technological competence before using any AI tool. That’s true of all technology, but it’s particularly true of AI. We obviously need to make sure, like we talked about, that the AI doesn’t imperil our confidential client information. And, and this is what we’ve been talking about for the last 20 minutes, we need to verify the accuracy of AI. That’s largely what we’re called on to do when clients use AI: verify accuracy and add to or correct it where appropriate.

And then lastly, and this is important, the opinion says that to the extent you’ve saved time for a client by using the tool, which we often do, you can’t charge the client for what it would’ve cost without AI. So, back to that text message or bank statement example you were giving, Brian: I can’t use AI to go through 10,000 pages’ worth of financial information in 5 or 10 minutes, for me or my staff, and then charge the client as if it took us 10 hours. You’d wish that would go without saying, but the State Bar Ethics Committee did make sure to point it out to us lawyers.

And so I’ll use that to wrap up the conversation. That’s kind of the whole point of us using AI as lawyers: we should always be looking for a more efficient, better product for our clients. This is supposed to be a client-focused service that we provide, and if AI can help the clients get a better product with less of our time, that’s better for everybody. We need to be really aware of that, and I know we are at our firm. It’s a really interesting topic, and we’ll continue to explore it as time goes on.

Let’s wrap up with that. That’s all we have for today. If there’s a topic out there that you would like us to discuss on the podcast, or if you’re interested in speaking to anybody on our legal team about your situation, please email us at podcast@waltersgilbreath.com. You can also find us online at waltersgilbreath.com. I’m Jake Gilbreath. I’m joined by my law partner, Brian Walters, and we appreciate you all listening.

For information about the topics covered in today’s episode and more, you can visit our website at waltersgilbreath.com. Thanks for tuning into today’s episode of For Better, Worse, or Divorce, where we host new episodes every first and third Wednesday. Do you have a topic you want discussed or a question for our hosts? Email us at podcast@waltersgilbreath.com. Thanks for listening. Until next time.