Emerging Litigation Podcast

Automation Comes to Our Litigation Nation with James Lee

Tom Hagy Season 1 Episode 86

In this episode we talk about litigation automation, and another case in which innovators are using artificial intelligence to transform legal operations.

We also speak with our guest about his transformation from a litigator to a tech entrepreneur, and how the company he co-founded is using modern tools to do in minutes what used to take hours. These tasks include responding to demand letters, complaints, and discovery requests, and executing matter profiling and data analytics, all of which are traditionally rote and repetitive and time-consuming undertakings.

He is James M. Lee, co-founder and CEO of LegalMation. James conceived the idea behind LegalMation -- which is to leverage the power of generative artificial intelligence to transform litigation and dispute resolution -- while managing a litigation boutique.  An experienced and recognized litigator and trial attorney, James received his J.D. from Stanford Law School. 

Also joining me, I’m pleased to say, is the ever-inquisitive and always attentive Sara Lord, legal analytics professional extraordinaire, who raised questions from the litigator's perspective. 

I hope you enjoy the conversation!

*******

This podcast is the audio companion to the Journal of Emerging Issues in Litigation. The Journal is a collaborative project between HB Litigation Conferences and the vLex Fastcase legal research family, which includes Full Court Press, Law Street Media, and Docket Alarm.

If you have comments, ideas, or wish to participate, please drop me a note at Editor@LitigationConferences.com.

Tom Hagy
Litigation Enthusiast and
Host of the Emerging Litigation Podcast
Home Page
Follow us on LinkedIn
Subscribe on your favorite platform. 


Tom Hagy:

What is your background and what brought you to LegalMation? What's your path there?

James Lee:

I've been a lawyer for close to 30 years. I started my career at Morgan Lewis and Quinn Emanuel, and then, after about five years of being an associate, I started a litigation boutique called LTL Attorneys with a few friends. That boutique grew to about 40 lawyers and we were doing some really cool stuff. We had developed a great reputation for patent defense cases and alternative fee arrangements on business contingency cases. We were doing high-profile cases like the Snapchat co-founder dispute, which was actually quite fascinating. And then, about five or six years ago, I attended a one-week program at Harvard Law School with Scott Westfahl and David Wilkins. I don't know if you know either of those guys, but you should have them on your podcast for sure. What this one-week program does is invite law firm managing partners and practice heads to learn about all the challenges of managing a law firm, and we're talking about most of the Am Law 200 sending their managing partners and practice heads to this program. One of the days was focused on how AI had been infiltrating the medical field, and that it was just a matter of time before it was going to penetrate the legal industry.

James Lee:

And I was looking around at all the other managing partners, and I could just see the look in their eyes. It was a combination of fear and greed that I saw. And I was fearful. I was fearful that the larger firms would basically steal our lunch on this. So I went back to my partners and I said, hey, we've got to do something about this. Our firm had done a number of software development cases, so we had some in-house expertise, and we just started to experiment with the proper applications of AI at that time. As a result, LegalMation was born out of that concept, where we were looking at ways of automating certain portions of brute-force activities in litigation.

Tom Hagy:

That leads to my next question. Can you elaborate a little more on the initial problems you were trying to solve, and what challenges did you face in trying to solve them?

James Lee:

Well, you know, at that time I'm talking about sort of older AI, machine learning sort of approaches, which are still quite valuable today, by the way, we'll talk about that later.

James Lee:

I'm sure you know, the beauty of these automated approaches or machine approaches is that they do target brute force activities, stuff that people are doing over and over again, very repetitive.

James Lee:

It is stuff that, as you guys know, in the practice of law half of the stuff is just procedural, just BS that you have to get through, and that's perfect for AI and machine learning. So we were asking ourselves, OK, from our perspective as litigators, where can we point this awesome technology? Our philosophy was, well, let's start at the very beginning: when you get a lawsuit, can we answer it, and then can we draft targeted discovery questions based on the allegations of the lawsuit? That was our first project. And sure enough, after about three months, we were able to get a pretty good alpha version working, to the point where, you know, one of our firm clients was Walmart, and so we went to them and showed them what we had, and they were just astounded. And Walmart has so much employment and personal injury litigation.

Tom Hagy:

They do yeah.

James Lee:

Yeah, they were a great first. They were our client from a firm perspective, but they were also LegalMation's first customer. So we rolled it out on their employment and personal injury cases. And this is work that takes hours. When you get a lawsuit you have to read it, digest it, answer it, think about the affirmative defenses you want to assert, and then you want to draft targeted discovery questions. That's work that takes six to eight hours, maybe more, maybe less, but our system was doing it in two minutes. And when I showed litigators, their jaws would drop, and you could just see that they knew the world was going to change at that point.

Tom Hagy:

Right, yeah. Really hit them. You know, I'm looking at your team on your website. First of all, why did you shave your beard? Secondly, who is at LegalMation? Who are the professionals that make up your company?

James Lee:

Yeah, right now there's a group of lawyers. I'm proud to say that we have a number of JDs on our team, and that's always very helpful because they have subject matter expertise. They can talk shop. They understand the pain points. The data scientists, on the other hand, would tell me they need more data, and I would tell them no, you're just going to get the same signal-to-noise ratio at that point, because what they don't understand is that in law, you know, they somehow think law is a science where there's a right and wrong answer. But, as you guys know, there are ranges of right and there are ranges of wrong, and you have to architect a system that captures these ranges, these gradients of right and wrong, and classifies them properly in order for a model to work. So I find it immensely helpful to have lawyers on the team, and then also on our AI data team.

James Lee:

It's headed by Martin Will, who was actually one of the early Amazon Alexa architects, and he was also heading SAP's new venture group for a while, so we're very fortunate to have him and the people he manages on our team. We have a development team, mostly all US-based, which is important, I think, and they're really good about maintaining the nuts and bolts of our platform. I think we have 99.9% uptime, which is really good, so it's really stable for the clients that use our system. And then we have a great product and customer success team. They're really bright, really sharp guys and gals, and they make sure the trains are running on time for our clients.

Tom Hagy:

Good. And Sara, feel free to chime in. I wanted to, but I'll keep talking, so you can't. I wanted to go through your offerings, not necessarily do a commercial, but these things sound really fascinating and useful, so we can just go through them one by one, and you tell me if it gets repetitive. Complaint response. How does this work?

James Lee:

Well, we make it as easy as possible, because we're dealing with lawyers and paralegals. The beauty of our system, I think, is that we provide an end-to-end platform to produce a response or an answer. And if you were to ask me what part of our system is really AI and machine learning focused, I'd say 20 to 25 percent.

James Lee:

A lot of the other stuff that we had to develop was really designed to address feedback that we got from users, because I think if you make it even a little difficult to use, lawyers and paralegals will revert to their old ways.

James Lee:

There's no doubt. So on our system, the idea is that you take a PDF copy of the lawsuit and drag and drop it onto our system. There is a confirmation page where we do want you to confirm spellings, because of possible OCR issues or bad copies. But after that preliminary step, which takes 15 seconds at most, you just press go, and boom, now you have a productivity tool where we have the ability to pre-answer all of the allegations based on historical data that a client gives us. We provide training models for them, and then they're able to point and click, press go, and get draft documents in Word, or final draft documents that they can just print to PDF and file at that point. We're really trying to expedite the entire workflow in a way that I think makes paralegals and lawyers more likely to use the system.
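
To make that workflow a little more concrete for technically minded listeners, here is a rough sketch in Python of the upload, confirm, and generate flow James describes. Everything in it is an assumption for illustration: the function names, the keyword lookup standing in for trained models, and the data shapes are invented, not LegalMation's actual pipeline.

```python
# Hypothetical sketch of a complaint-response flow: parse allegations from OCR'd
# text, then pre-answer each one from a client's historical positions.
from dataclasses import dataclass

@dataclass
class Allegation:
    number: int
    text: str

def extract_allegations(ocr_text: str) -> list[Allegation]:
    """Stand-in for OCR plus parsing: treat each non-empty line as one allegation."""
    lines = [line.strip() for line in ocr_text.splitlines() if line.strip()]
    return [Allegation(i + 1, line) for i, line in enumerate(lines)]

def draft_response(allegation: Allegation, history: dict[str, str]) -> str:
    """Pre-answer an allegation from the client's own prior positions (toy keyword match)."""
    for keyword, answer in history.items():
        if keyword in allegation.text.lower():
            return answer
    return "Defendant lacks sufficient information to admit or deny."

if __name__ == "__main__":
    complaint = "1. Plaintiff was employed by Defendant.\n2. Defendant failed to pay overtime."
    client_history = {"employed": "Admitted.", "overtime": "Denied."}
    for a in extract_allegations(complaint):
        print(f"Allegation {a.number}: {draft_response(a, client_history)}")
```

In a real system the keyword lookup would be replaced by the client-trained models James mentions, and the output would be assembled into a Word or PDF draft rather than printed.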

Tom Hagy:

OK, yeah, ease of use is huge, huge, in the crush of a day. I don't think any of us, when somebody says, oh, here's some new system, here's a new app, and you get in there and you're like, oh, I've got to do this, I've got to do that. If you're 20 or 30 minutes in and you're still having hell with it, you just move on. But Sara, that's a good point for you. You were asking me about some of the data behind the platform.

Sara Lord:

Yeah, a couple of questions. You can't really talk about a tool these days that drafts responses, drafts motions, whatever it's writing, without hitting on hallucinations. How do you address the hallucination issue so that lawyers are confident using this tool and in the results they're receiving?

James Lee:

Yeah. So I'm very proud to tell you guys, and I like to brag, that we're never wrong when we provide answers to our clients, because we are only using their training data, their data, to build out models that identify the allegation, comprehend it, and then provide what the response is. Now let me back up and give you more context on that. When we were initially developing our platform, what we discovered is that lawyers think of themselves as artists. This has to do with the gradients of right and wrong.

Sara Lord:

They tend to have arts degrees, I mean.

James Lee:

Yeah, well, actually, one lawyer complained that, between objections, our system was not providing the transition words he thought it should. And this is where it gets kind of, I won't say silly, but the reality is that because lawyers want to put their personal touch on a lot of the responses, you have to build biases into the model. I've said this before publicly and I'll say it again: we build bias into our systems, because if we did not use the types of preferences or decisions that an organization likes to use, they just wouldn't use it. In other words, if you built a compromise model, which is really what I think people are talking about when you have different organizations contributing, quote unquote, data, what you end up with is a compromise model that no one is going to be very happy with.

James Lee:

So what we do is build out specific data warehouses for our clients. For instance, we have a number of auto manufacturers as clients, and they face the same opponents. What's really interesting is that they respond very differently, philosophically, from one organization to another. If I were using some sort of joint community model, no one would be happy. So we're building out specific data warehouses for specific clients, and whenever they face particular allegations or discovery requests, we're pulling only from that pool of possible responses. So the risk of hallucination is almost zero at that point.
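
For listeners who want to picture what "pulling only from that pool" might look like, here is a minimal sketch, assuming a simple per-client store keyed by allegation type. The class, method names, and sample clients are invented for illustration; the point is only that a draft answer is drawn verbatim from that client's own history or not produced at all.

```python
# Hypothetical per-client "data warehouse": responses come only from that client's
# own historical positions, never from another client's pool and never free-generated.
from collections import defaultdict

class ClientWarehouse:
    def __init__(self) -> None:
        # client_id -> allegation_type -> previously approved response
        self._store: dict[str, dict[str, str]] = defaultdict(dict)

    def ingest(self, client_id: str, allegation_type: str, response: str) -> None:
        self._store[client_id][allegation_type] = response

    def respond(self, client_id: str, allegation_type: str) -> str | None:
        # Only this client's pool is consulted; a miss returns None for human review.
        return self._store[client_id].get(allegation_type)

warehouse = ClientWarehouse()
warehouse.ingest("automaker_a", "design_defect", "Denied. The vehicle met all applicable standards.")
warehouse.ingest("automaker_b", "design_defect", "Denied as phrased; discovery is ongoing.")

print(warehouse.respond("automaker_a", "design_defect"))    # automaker A's own language
print(warehouse.respond("automaker_b", "design_defect"))    # same allegation, different philosophy
print(warehouse.respond("automaker_a", "failure_to_warn"))  # None -> escalate to a lawyer
```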

Sara Lord:

Okay. So then, if you're utilizing that contained data set to train your model, it feels like it's really going to work best for firms that have a recurring case type. So when I get a novel case in something I don't generally work on, how well does this tool work for that?

James Lee:

Yeah, we're primarily priming this for specific domains, so there's a certain swim lane. In other words, it works really well within that swim lane, but a different kind of case, a personal injury case for example, goes into another swim lane, because it has different models that it's pulling from.
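
A small sketch may help picture the swim-lane idea: each matter type is routed to its own domain-specific model rather than to one general-purpose model. The routing table and model functions below are hypothetical placeholders, not how LegalMation actually implements it.

```python
# Hypothetical "swim lane" router: each matter type gets its own domain model.
from typing import Callable

def employment_model(allegation: str) -> str:
    return f"[employment lane] proposed response to: {allegation}"

def personal_injury_model(allegation: str) -> str:
    return f"[personal-injury lane] proposed response to: {allegation}"

SWIM_LANES: dict[str, Callable[[str], str]] = {
    "employment": employment_model,
    "personal_injury": personal_injury_model,
}

def route(matter_type: str, allegation: str) -> str:
    model = SWIM_LANES.get(matter_type)
    if model is None:
        # A novel matter type has no trained lane yet, so hand it back to a human.
        return "No trained swim lane for this matter type; route to counsel."
    return model(allegation)

print(route("employment", "Plaintiff alleges wrongful termination."))
print(route("maritime", "Plaintiff alleges unseaworthiness."))
```

The fallback branch also echoes Sara's question: a truly novel case type simply has no lane to pull from until enough examples exist to train one.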

Sara Lord:

Okay. And because you're using only that firm's information, that firm's source data, if you have another client that does a lot of work in personal injury and I don't, that knowledge from that personal injury data set doesn't flow into the LegalMation that I use?

James Lee:

It doesn't. And you know, here's something else that is not surprising, I think, for all of us on this call and the people who may be listening: lawyers think that only their work product is the best. I get this all the time from certain lawyers and law firms that are worried that their competitors are going to steal from them. They're not going to steal from them; they don't believe that any other firm's work product is going to be better than theirs. It's really incredible. Whenever we try to introduce this concept of a community model, they reject it almost instantly. They don't want that. They want their own stuff, they want their own language, they want their own tone, their own tenor. They're just very particular in terms of how they want their work product to look.

Tom Hagy:

In defense of people with lower opinions of their own work, I do know a lot of lawyers who would grab pleadings and borrow generously from them. Sara, did you have a follow-up to that?

Sara Lord:

Yeah, just within a firm, especially a larger firm. Larger firms may have more than enough data to effectively train the system. Admittedly, there are smaller firms that work in one swim lane, and they will also have a good set of data. But for firms that are large enough that you have different partner personalities, all of whom have their own language that they like to use, their own style, does the tool have any ability to provide results customized to the actual attorney? Yes? Awesome, tell me about that.

James Lee:

Yeah. So essentially, when we collect data and store it, we're adding a metadata label for the group or the lawyer. And when the user comes in and wants the knowledge management of a particular group or lawyer, you can basically switch that on, and it will then pull from and look at only that selection.
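
In data terms, that could be as simple as storing an author or group label next to each historical response and filtering on it at retrieval time. The sketch below assumes exactly that; the field names and sample entries are invented for illustration.

```python
# Hypothetical metadata-label filter: each stored response carries the lawyer and
# practice group that produced it, and retrieval can be restricted to one label.
from dataclasses import dataclass

@dataclass
class StoredResponse:
    allegation_type: str
    text: str
    author: str   # lawyer metadata label
    group: str    # practice-group metadata label

KNOWLEDGE_BASE = [
    StoredResponse("overtime", "Denied. Plaintiff was an exempt employee.", "partner_a", "employment"),
    StoredResponse("overtime", "Denied as an incomplete statement of the parties' agreement.", "partner_b", "employment"),
]

def retrieve(allegation_type: str, author: str | None = None) -> list[str]:
    """Pull candidate responses, optionally restricted to one lawyer's style."""
    return [r.text for r in KNOWLEDGE_BASE
            if r.allegation_type == allegation_type
            and (author is None or r.author == author)]

print(retrieve("overtime"))                       # firm-wide pool
print(retrieve("overtime", author="partner_b"))   # only partner B's phrasing
```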

Sara Lord:

Great. Tom, I'll let you talk again.

Tom Hagy:

Oh no, thank you, Sara. I appreciate it. I just want to ask you about one thing you said earlier. You used the word bias; you said you build bias. Could you elaborate? People who hear the word bias immediately think of negative things. What did you mean when you said that?

James Lee:

Yeah, like I said, there are leanings that certain lawyers have about the way they may want to answer certain requests, or certain perspectives they hold. And, like I said, having worked particularly with lawyers, though I believe this to be true of general users across the board, I think you need to take into consideration their style, their preferences, their views, their worldview of how they want to express themselves.

James Lee:

And in order to do that, you need to build out these mini models, which is actually what's happening in the enterprise world, just so you guys know: the use of these large language models is probably going to give way to smaller, domain-specific models. And I think what's going to end up happening as you move forward is that people are going to realize that, in order to have more successful results or outcomes, you're going to have to build out more preferences that a model has to pick up in order for a user to start using it. Otherwise, like I said, a compromise model is going to be novel at first, there's no doubt, but when companies and enterprises try to implement them, they're going to be very frustrated, which is actually what's happening with a lot of the large language model experiments that are going on right now.

Tom Hagy:

Yeah, it reminds me of a couple of things, having written for other people for a lot of my career. Learning individual attorneys' styles is tricky. I've gotten things back like, "I would never talk like that."

James Lee:

No, I mean, look, it was very difficult drafting stuff for different partners. What I mostly ended up doing was looking at exemplars from the partners so that I could conform the style and tone to what he or she might expect or want. People don't realize it, but that sort of work process has been going on for decades.

Tom Hagy:

Now you have these other services: discovery response, matter profiling. I guess it's all basically the same idea, but tell me how they're different, or if they are.

James Lee:

Yeah, well, they're all basically the same in the sense that we started at the start of a lawsuit and then asked, as the dominoes begin to fall, what's the next natural process? And all of them basically do the same thing. There's an input, a demand of some kind: you have to respond to discovery questions, you have to respond to a demand letter, you have to respond to a lawsuit. There's some input document or documents that you need to respond to. We tie that into knowledge management data warehouses of the historical decision-making that an organization has, which enables the user to very quickly go through and respond to the input document, so that at the end of the process you have a fully fleshed-out document to serve or file.

Tom Hagy:

Matter profiling. I think that's extremely important. Could you talk a little bit about how matter profiling is done, both how it's done traditionally and with your service?

James Lee:

Traditionally it's a highly manual process and I hear horror stories of organizations that try to do it manually in order to build out maybe some training data and then what they find out is that number one it's inconsistent.

James Lee:

Number two, it's hard to scale, because there's often turnover on these human teams; it's a low-skill task and people get very bored.

James Lee:

What we do is use a combination of automated processes, but essentially we're extracting the data and then building out models to determine what each data field is. It could be incoming forms, it could even be documents like settlement agreements; we're able to figure out how much was actually paid in settlement from the document itself. We're working on a project right now where we can determine the arbitration rules involved in an arbitration document, and whether it's one arbitrator or three arbitrators, or whether it's silent. We can do all those types of things that, again, take a lot of time manually and now can be done automatically. And that's really encouraging to see, because now you can start building what Martin Will likes to call a data flywheel, where you can imagine, as a case or document comes in, you collect all that data, and it's then pushed to the next series of activities throughout the entire life cycle of that lawsuit.
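
As a rough illustration of what extracting those fields could look like, here is a toy matter-profiling function. A real system would use trained models per field; simple regular expressions stand in for them here, and the field names and sample document are invented.

```python
# Hypothetical matter profiling: pull structured fields out of an unstructured document.
import re

def profile(document: str) -> dict[str, object]:
    fields: dict[str, object] = {}

    # Settlement amount, if the document states one.
    amount = re.search(r"settlement (?:amount|payment) of \$([\d,]+)", document, re.I)
    if amount:
        fields["settlement_amount_usd"] = int(amount.group(1).replace(",", ""))

    # Number of arbitrators: one, three, or silent.
    arbitrators = re.search(r"\b(one|three|1|3)\s+arbitrators?\b", document, re.I)
    fields["num_arbitrators"] = (
        {"one": 1, "1": 1, "three": 3, "3": 3}[arbitrators.group(1).lower()]
        if arbitrators else "silent"
    )
    return fields

doc = ("The parties agree to a settlement payment of $250,000. Any dispute shall be "
       "resolved by three arbitrators under the AAA Commercial Rules.")
print(profile(doc))   # {'settlement_amount_usd': 250000, 'num_arbitrators': 3}
```

Once fields like these are captured consistently, they can feed the downstream steps of the same matter, which is the data flywheel James describes.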

Tom Hagy:

You all just came out with something new, with a lot of excitement around demand letter responses. Tell me about the problem you're trying to solve there, and again, if you could, how it's been done traditionally and how you do it with your service.

James Lee:

Yeah. So again, it's the same sort of thing: it's an input that requires a response, and we want to capture the knowledge management stores of an organization and then help the user generate an output. The next thing in the litigation lifecycle is demand letters, and, by the way, the number of demand letters that an organization has to respond to is much higher than the number of lawsuits filed in America. So, for instance, one of the solutions you're talking about is the ability to answer an EEOC administrative charge document, where the output is basically a letter-form response. There, on our system, the user is able to upload not only the charge document but any supporting information, and then we're able to process that, identify all the key issues, again a version of the matter profiling, and then offer suggestions about what the legal arguments ought to be. And this is another good example of where an LLM actually is appropriate, where you begin to provide some short summaries of certain key aspects of the case, so that it really speeds up the process of responding to one of these letters.
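
For a concrete, if simplified, picture of that charge-response flow, here is a sketch that profiles the charge, reuses the organization's prior positions, and reserves the language model for the short summary step. The `summarize_fn` callable is a stand-in for whatever LLM call a production system would make; none of the names here are LegalMation's actual API.

```python
# Hypothetical EEOC charge-response flow: retrieve prior positions per issue and
# use an LLM-style summarizer only for the short narrative summary.
from typing import Callable

def draft_position_letter(charge_text: str,
                          prior_positions: dict[str, str],
                          summarize_fn: Callable[[str], str]) -> str:
    # LLM-style step: a short summary of the charge for the letter's opening.
    summary = summarize_fn(charge_text)
    # Retrieval step: reuse the organization's prior positions on each issue raised.
    issues = [issue for issue in prior_positions if issue in charge_text.lower()]
    body = "\n".join(f"- {issue}: {prior_positions[issue]}" for issue in issues)
    return ("RE: Response to Charge of Discrimination\n\n"
            f"Summary of allegations: {summary}\n\n"
            f"Respondent's positions:\n{body}\n")

def fake_summarize(text: str) -> str:
    # Offline stand-in for a real language-model call.
    return text if len(text) <= 80 else text[:77] + "..."

charge = "Charging party alleges retaliation after reporting unpaid overtime."
positions = {"retaliation": "Respondent denies any retaliatory action.",
             "overtime": "All hours worked were properly compensated."}
print(draft_position_letter(charge, positions, fake_summarize))
```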

James Lee:

The feedback that we've been getting is that this is the type of work that may take four to eight hours to draft a letter, and on our system they're getting it out in less than an hour. So it's working out great. But, you know, demand letters come in many flavors. It's not only EEOC documents; it can be insurance denial letters, all those types of things that make this a very robust area for automation.

Tom Hagy:

So you mentioned Walmart, you mentioned the insurance industry. Who are the best targets you see for this kind of thing? Walmart obviously has a lot of cases, insurance companies have a lot of cases. Who else?

James Lee:

I have a funny story about this. It has to do with me washing dishes. I was washing dishes one night, and my wife makes fun of me and says, hey, you run an automation company, why aren't you using the dishwasher? And for two weeks I could not answer that question, and it really bothered me. I was like, why am I not using a dishwasher? And I realized I just don't do enough dishes. That's not a pain point for me, where I'm going to bend down and stack the dishes in a dishwasher. And I tell you that story because, for any automation to work, an organization has to have a lot of dirty dishes. Who has a lot of dirty dishes, i.e., lawsuits and these types of claims? Well, insurance is the largest by far. You talk about the number of personal injury cases they have: insurance, motor vehicle, premises liability. They are by far the number one spenders. I think the next series are large manufacturers, with their warranty litigation and consumer litigation, another large group. Again, just a lot of dirty dishes there. And then I would say any large organization, any Fortune 100 company that has a lot of interactions with consumers, with government, anywhere they have a lot of litigation. The financial industry, the services industry, has a lot. Those are the types of organizations where automation works really well. Right, you've got a lot of dirty dishes.

Sara Lord:

When you're evaluating fit, so when you're talking to a new potential customer and trying to decide if this tool would make sense for them, are you looking for a specific amount of data? Is there a specific mass of data that they should have when it comes to getting started with your tools?

James Lee:

Yeah, I think that's a great question, because we did this experiment with one pharma company where the question was, well, how many cases or how many examples of a lawsuit do we need to see before the system will perform in, I'm going to call it, a 90-percent-plus accuracy range? And what we found was that if we're talking about one pharmaceutical drug, in other words the contours of that dispute are very well defined because it's one drug with very well-defined issues, as little as 100 lawsuits and answers was enough to get that level of accuracy. For other domains, let's say employment, where you have up to 20 different causes of action and within each cause of action many subtypes, you're going to require a lot more than that, maybe 500 to 1,000 minimum, before it really starts behaving in a way that is fairly predictable. Otherwise you end up with some wonky results and some inconsistencies. So a sliding scale is maybe the best way I can describe it.

Tom Hagy:

Good. Well, that's all I had. I just want to know what's next. Is there anything coming next that we should be looking for from you guys?

James Lee:

Yeah, we're building out a subpoena response tool that should be coming out in a couple of months. We're also releasing a deposition assistant tool, and I realize there are a couple out there in the market already. But one of the great things we discovered about our system is that all the know-how we've built in on responding to discovery, for instance, and being able to find semantic similarities, means that we can increase the signal and decrease the noise when sending materials out for analysis. One of the exciting things about this is the ability now to look across dozens and dozens of depositions for consistencies or inconsistencies. And one of the things I'm really excited about, you'll actually be the first to know about this in public, is that we have a project with the California Appellate Project this summer.
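
To show the kind of semantic-similarity pass James alludes to, here is a minimal sketch that flags deposition excerpts worth comparing side by side. It uses scikit-learn's TF-IDF vectors and cosine similarity as a stand-in for whatever embeddings a production system would use, and the excerpts and threshold are invented.

```python
# Hypothetical consistency check across deposition excerpts via cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

excerpts = [
    "The witness stated she never saw the contract before March.",
    "Witness testified she had not seen the contract until March.",
    "The foreman said the machine guard was removed the week before the accident.",
]

vectors = TfidfVectorizer().fit_transform(excerpts)
similarity = cosine_similarity(vectors)

# Flag pairs of testimony that are close enough to compare for (in)consistencies.
for i in range(len(excerpts)):
    for j in range(i + 1, len(excerpts)):
        if similarity[i, j] > 0.3:   # threshold chosen arbitrarily for the sketch
            print(f"Compare excerpts {i} and {j} (similarity {similarity[i, j]:.2f})")
```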

James Lee:

Who are they? Well, they are the death penalty appellate lawyers that the state of California funds. In the state of California, every time there's a death penalty conviction, the defendant has an automatic appeal, and this organization, which is basically a law firm, is tasked with providing the appellate defense. That means thousands and thousands of pages of transcripts to review for potentially relevant testimony that can help in overturning some of those death penalty convictions. We have a couple of fellows from Stanford and Georgia Tech who will be helping with that, and, apart from anything we do on behalf of enterprises, this is just a really cool, interesting use case where, to the extent that we can find real evidence of discrimination and bias in testimony, in closing arguments and opening statements and voir dire, it's going to be a great way for us to help out marginalized people. It's just one of those things that I think is pretty cool to do.

Tom Hagy:

That's important work. Sara, did you have anything else? I'm going to let James get back to it.

Sara Lord:

Sounds good. This has been a great discussion, so I appreciate you inviting me to participate.

Tom Hagy:

Well, James Lee, thank you very much for talking to us.

Sara Lord:

All right.

Tom Hagy:

Thanks.