Emerging Litigation Podcast

Litigation Prognostication with Dan Rabinowitz

May 01, 2024 Tom Hagy Season 1 Episode 83
The art and science of forecasting litigation outcomes just got a lot more sciencey.

Years of immersion in complex business disputes is bound to shine a light on problems begging for solutions. In this case, our guest observed the laborious and ineffective slog that is trying to forecast how long a case will take, how much it might cost, which jurisdiction will treat it with kindness, or how a judge might rule on a motion for summary judgment.

These are some of the critical questions our guest set out to address through the use of technology and assessment of massive data sets. He is Dan Rabinowitz, Co-Founder and CEO of Pre/Dicta, a six-year-old company that provides litigation prediction and forecasting services. Before Pre/Dicta, Dan was an attorney in Sidley Austin LLP’s Supreme Court and Appellate Group and the firm’s Mass Tort Litigation Group. Later, he served as a trial attorney in the U.S. Department of Justice, general counsel to a data science company, and associate general counsel, chief privacy officer, and director of fraud analytics for WellPoint Military Care.

Listen to what Dan has to say about how the power of technology is going to make predicting litigation as commonplace as predicting the weather. He also shares insights into a study Pre/Dicta conducted that tested assumptions about judges based on their political affiliations. 

*******

This podcast is the audio companion to the Journal of Emerging Issues in Litigation. The Journal is a collaborative project between HB Litigation Conferences and the vLex Fastcase legal research family, which includes Full Court Press, Law Street Media, and Docket Alarm.

If you have comments, ideas, or wish to participate, please drop me a note at Editor@LitigationConferences.com.

Tom Hagy
Litigation Enthusiast and
Host of the Emerging Litigation Podcast
Home Page

Transcript

Speaker 1:

Welcome to the Emerging Litigation Podcast. This is a group project driven by HB Litigation, now part of Critical Legal Content, and the vLex companies Fastcase and Law Street Media. I'm your host, Tom Hagy, longtime litigation news editor and publisher and current litigation enthusiast. If you wish to reach me, please check the appropriate links in the show notes. This podcast is also a companion to the Journal of Emerging Issues in Litigation, for which I serve as editor-in-chief, published by Fastcase Full Court Press. And now here's today's episode. If you like what you hear, please give us a rating.

Speaker 1:

There, of course, was a time when we could not predict the weather, but that didn't stop us from trying. We looked at things like what time of year the acorns started falling off the trees, or whether there was an odd smell in the air. I say "we" like I was, you know, a weatherman during the Civil War. Apparently, if the frogs were suspiciously quiet, that was some kind of a signal. And today, still: did a plump rodent, dragged cranky from his den, cast a shadow? So yeah, we've always tried, without science, and at times with disastrous results. Of course, we also could not predict how tall a baby would be when he or she became a full-grown adult, but we thought we could. I was born during the Eisenhower administration. When I was three, my pediatrician told my mom (because I wasn't listening; I was probably trying to put my fist in my mouth) that I would grow to be six feet and three inches tall. My mom was thrilled. She was also Italian, and no one in her family was even six feet tall. That would have been a tower. But based on my doctor's calculations, and given my dimensions in 1960, I would grow tall enough to fetch for my mom a carton of cigarettes off the highest shelf in any grocery store or pharmacy or church. Six foot three? All I can say is: not even close. Maybe it was the secondhand smoke.

Speaker 1:

A couple of episodes ago we talked about career paths litigators take outside of litigation. Certainly makes sense. When you have years of immersion in complex business disputes, it's bound to shine a light on some problems that are out there begging for solutions, apart from bringing your case to a satisfactory resolution. Our guest is another such person, and what he observed was a laborious and ineffective slog of trying to forecast how long a case would take, how much it might cost (something pesky clients always want to know), in which jurisdiction a case might have the greatest hope, or how a judge might rule on summary judgment motions. That's the Holy Grail. So these are among the problems he set out to address. And now, in the age of large data sets and artificial intelligence, he believes predicting outcomes across the span of litigation will become as routine as checking the weather. And who knows, maybe I'll have a growth spurt.

Speaker 1:

He is Dan Rabinowitz. He is co-founder and CEO of Pre/Dicta, a company that provides litigation prediction and forecasting services. Dan practiced as an associate in the Sidley Austin Supreme Court and Appellate Group and in the firm's Mass Tort Litigation Group. Later he served as trial counsel for the U.S. Department of Justice and general counsel to a Washington, DC-based data science company. He was also associate general counsel, chief privacy officer and the director of fraud analytics for WellPoint Military Care. Yeah, listen toward the end. You know, we have about 890 federal judges and roughly 30,000 state court judges.

Speaker 1:

Dan and his group did a case study in which they set out to test the assumption that how a judge might rule can be determined in part by the president who appointed that judge, or that judge's political party, or that judge's gender. I found the outcome fascinating. And, as a voracious news consumer, I couldn't help but notice lately how often the media, when reporting on which judge is overseeing a case or how judges ruled on a case, refer to their political background or the president who appointed them. That's new. So look for a link to that case study in the show notes. I think you'll find it interesting. And now here's my interview with Dan Rabinowitz, co-founder and CEO of Pre/Dicta. I hope you enjoy it. Dan Rabinowitz, thank you very much for taking the time to talk to me today.

Speaker 2:

Absolutely. Thank you for inviting me.

Speaker 1:

We're talking today about your innovative product, Pre/Dicta. I've already introduced you and a bit about the product, but just from your description, what it does gives a monumental advantage to lawyers in the ability to strategize for risk exposure, litigation likelihoods, settlement strategy, et cetera. Not just law firms, but companies too. So this is about predictions. First, though, could you talk about your professional journey? I found it interesting.

Speaker 2:

Absolutely. So I'm a former practicing attorney. I started my career at a large law firm in the DC area and eventually did, if you will, the DC circuit of jobs, going into the Department of Justice and then eventually in-house at a government contractor. While I was at the large firm (and large firms really do their best to cover all the bases when they're litigating on behalf of their clients), I was once tasked with a project to assess, or predict, how a judge might rule. The task was: go through all of the judge's motion-to-dismiss rulings and see how the judge would then rule on our motion to dismiss. Now, when I went back and did that, first of all, there were only maybe 20 or 30 motions to dismiss in total with reported or unreported opinions, and none were in the area we were litigating, products liability. I was struck by how that might not be the best approach for forming a prediction, simply because there's a lack of data as it relates to opinions. Judges write opinions in fewer than 2% of all motions to dismiss, slightly higher when it comes to other types of motions. So with that limited data set, you're missing anywhere from 95 to 98% of all the data.

Speaker 2:

Needless to say, extrapolating from that is not the best way to do statistics. But in the course of my legal career I was exposed to the concepts of big data and data analytics and harnessing that power, and, in particular, to looking at data in a very different way than I think most attorneys would look at data as it relates to judicial proceedings.

Speaker 2:

When I was exposed to it, it was much more the idea of creating linkage between non-obvious elements. Rather than simply looking for a direct connection (let's say a particular legal question and then an outcome), there are so many other factors we can use in order to form predictions. And, of course, we always have to distinguish between outcome and prediction. A prediction is just as it says: it's how we think, or how we believe (in our case it's not based on belief, but on data analysis), the case or the judge might rule. That is, of course, different from how you're going to craft your brief or what legal issues you may raise. So in terms of predictions, this enables us to open the aperture significantly as to which characteristics and elements involved in any given case might have predictive value. So that's where I got to, and I ultimately decided to really pursue that avenue rather than continuing to practice law, however much I enjoyed it.

Speaker 1:

Yeah, okay. So tell me about the aperture and opening it up. What is this algorithm, I guess, or what is your service trained on?

Speaker 2:

Yeah.

Speaker 2:

So what we wanted to do, rather than looking at that limited data set that attorneys typically use in legal research, again for the predictive elements, was to look beyond it. The largest data source of decisions, of outcomes rather than opinions, is of course either PACER or an equivalent system within the state courts, and that records every judicial event, whether it be an entry of appearance or a telephonic conference. But it also records all the decisions the judge makes, and by decisions I mean the grants and denials, so there's no context to those. Now that you have a large enough data set, you're still limited, because you don't have any of the rationale as to what led to each outcome. We've, if you will, solved one problem, but we need to solve a different problem: how do we now use that information? And, of course, simply using it from a raw statistical perspective is non-predictive and, in fact, in most instances misleading. That means that if you look at a judge using PACER data and you say, well, the judge grants motions to dismiss in 80% of all cases, anyone with even a basic understanding of statistics knows that that does not mean you have an 80% chance of your motion to dismiss being granted. Unless you know whether your case puts you in the 80% or in the 20%, that statistic is meaningless. It's nice to know from a historical perspective, maybe, if you're tracking trends and you're an academic, but from a practitioner or client perspective, that number alone doesn't get you anywhere.
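To put rough numbers on that point, here is a toy illustration (invented data, nothing from PACER or Pre/Dicta): a judge's overall grant rate can be 80% while the rate for your kind of case is entirely different.

```python
# Hypothetical docket outcomes for one judge: each motion to dismiss is
# tagged with a single case feature and whether it was granted (toy data).
motions = [
    {"case_type": "contract", "granted": True},
    {"case_type": "contract", "granted": True},
    {"case_type": "contract", "granted": True},
    {"case_type": "contract", "granted": True},
    {"case_type": "products_liability", "granted": False},
]

overall = sum(m["granted"] for m in motions) / len(motions)
print(f"overall grant rate: {overall:.0%}")  # 80%

# Conditioned on the kind of case you actually have, the number flips.
pl = [m for m in motions if m["case_type"] == "products_liability"]
pl_rate = sum(m["granted"] for m in pl) / len(pl)
print(f"products liability grant rate: {pl_rate:.0%}")  # 0%
```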

Speaker 2:

So we took those numbers, and what we wanted to do was try to understand whether we could see patterns within them. Again, rather than trying to discern the underlying rationale (which you cannot), we said: let's look at those numbers, 80 percent, 70 percent, and see, using machine learning and eventually AI, whether we can find patterns within them. Ultimately, our analysis bore out that the predictive factors in cases are the parties and the attorneys (and I'll get to the judge in a moment). And by parties and attorneys, from a data perspective, what we do is break the parties and the attorneys down into component parts. Rather than simply looking at a company as just the name of the company, Bayer (how many times does Bayer win a motion to dismiss? Again, you're faced with the same challenge: unless you know what drove those cases, knowing they won 90% of the time or lost 10% of the time isn't helpful), what we do instead is create an ontology, a structure, for every party and every attorney. So Bayer becomes a pharmaceutical company that's also a medical device company, that's publicly traded, that's on the S&P, that's headquartered internationally, that has a certain amount of revenue.

Speaker 2:

Now, once we break the party down, and then do the exact same for the attorney and the law firm, rather than simply having Bayer versus GlaxoSmithKline, we have, if you will, a whole string of characteristics versus a whole string of characteristics on the other side of the V, right? So we've taken these four variables (plaintiff and defendant, and then their respective attorneys) and created all of these different characteristics and component parts: the DNA, if you want to look at it that way, of each of those four variables. Once you've broken it down and created this DNA structure (just for purposes of analogy), you can start seeing patterns within it. Well, if it's a medical device company but not a pharmaceutical company, and it's domestic and it's privately held, can we see, as we parse through the data, any patterns?
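As a sketch of what that party "DNA" might look like as a data structure (the field names here are illustrative guesses, not Pre/Dicta's actual ontology):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartyProfile:
    """A party reduced to comparable characteristics rather than a name."""
    industries: frozenset      # e.g. {"pharmaceutical", "medical_device"}
    publicly_traded: bool
    sp_500: bool
    headquarters: str          # "domestic" or "international"
    revenue_band: str          # e.g. "10B_plus"

# "Bayer" stops being a string and becomes a bundle of features that can be
# matched against other parties sharing some or all of those features.
bayer = PartyProfile(
    industries=frozenset({"pharmaceutical", "medical_device"}),
    publicly_traded=True,
    sp_500=True,               # per the description above; values illustrative
    headquarters="international",
    revenue_band="10B_plus",
)
```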

Speaker 2:

Now, of course, in order to do this type of analysis, you need massive data sets.

Speaker 2:

So we looked at, I think the number now is around 15 million or so, different cases and docket entries in order to create and understand these particular patterns.

Speaker 2:

And that's the only way you really can do this, because with a very small data set you inevitably won't have sufficient representation of any given type of characteristic. Now, once you do that, you also have to weigh and determine which of these are signal and which are noise. When you're looking at statistics, you always have to understand that sometimes what you're looking at is irrelevant: you might see a pattern, but the pattern is not really meaningful, and that would be the noise. Then, of course, you have to identify the signal. Once we identify those patterns, what we can do is incorporate the other key component, and that is the judge. We do the exact same process for the judge that we did for the parties and the attorneys. So again, we break the judge down into their DNA, and of course their DNA, you don't know...

Speaker 1:

Can I interrupt for a second?

Speaker 2:

Of course.

Speaker 1:

Talk more about the attorneys, doing the DNA of attorneys. What kinds of things do you look at with them?

Speaker 2:

Sure. So what we're generally looking at is association with firms; we're trying to understand, for an attorney associated with a particular firm, what that structure looks like. For any given attorney alone there isn't enough data, and this is the problem you actually have with the current approach many attorneys take, for better or worse. Many attorneys will say (and I certainly experienced this) the firm has just signed up a new client who has a piece of litigation in, you know, the District of Iowa before a certain judge, and they send out a firm-wide email: does anyone have any experience with Judge So-and-So? Our client was just sued there, and so on and so forth. And inevitably you'll have one or two lawyers respond saying: oh yes, I appeared before her.

Speaker 2:

Especially tough on defendants. Now, what they fail to tell you is that they appeared before her as an AUSA, that ninety percent of their cases were criminal (and this, of course, is a civil contract matter), and that even if they appeared before her a meaningful number of times, maybe 10, 15, 20 times, the judge might hear and rule on three, four, five cases a day. So their experience is so limited. Again, it goes back to the opinions.

Speaker 2:

But yet attorneys rely on those very rudimentary assessments of judges. Now, when they're doing legal research, they would never rely on that small of a data set, right? They wouldn't find one case or two cases that they got from their colleagues and say, okay, I'm done writing the brief, I found the two cases, I don't have to do any further research. Yet when it comes to the identification of judges and how they might interact with the particulars of the case, that's what they do. Now, to be fair to them, the reason attorneys do that is because there is no effective way to do that analysis through legal research. The only way to do that type of analysis is through our approach, through behavioral analytics, where we're looking at different, alternative factors. Because, again, we aren't interested in writing briefs; we're interested in predicting judicial behavior, and that's very, very different. As you might imagine, when I talk with attorneys about our capability, in some instances, to predict with a near 85% rate of accuracy without looking at the facts and the law, attorneys are incredibly skeptical. They say: no, I can do a better job. Or: how does that make any sense? I wrote the best brief.

Speaker 2:

Now, of course that's true, and our approach is not intended, if you will, to enter the realm of legal practice. That is totally apart from what we're doing. We are not determining outcomes; we are simply offering predictions. And in order to get to predictions, you do have to step outside the traditional legal research, the traditional legal arguments. So the difference between discerning human behavior and discerning legal arguments: it's almost like two different tracks, totally apart from one another.

Speaker 1:

Yeah, makes total sense. So let's talk about the different applications you support attorneys with: motion prediction, litigation timelines, motion models, judicial benchmarking and venue selection. You've talked a bit about motion prediction, but it also helps, you said, with litigation timelines. How does that work?

Speaker 2:

Yeah. So again, let's go back to the way this has been done in the past, so we can see the difference in our approach and why it's considerably more valuable. Attorneys are trying to determine: how long is the case going to take? Right, it's the question they're going to get asked by their client, because time is money at the most basic level. It matters in terms of estimating budgets and resources, whether on the client side (how long are they going to have this distraction of litigation?) or on the attorney side (how long are they going to have to task an associate?). So, again, go back to the raw statistics. You look at the raw statistics and you say, well, Judge X takes, overall, across her cases, 350 days to summary judgment and, if it goes to trial, 700 days.

Speaker 2:

Now, if you're not looking at the key characteristics of the case, those numbers can be incredibly skewed. Take a very simple example. Say it's 359 or 360 days to summary judgment, and that number includes a lot of individuals litigating against one another, with lower dollar values and potentially smaller firms, while your case is a big firm, an AmLaw 50, against another AmLaw 50, with $1.2 billion at issue and multiple plaintiffs and multiple defendants. Needless to say, that statistic of 360 days, because 95% of the cases behind it are single plaintiffs against single defendants, has no bearing on your case. So if you use it and tell your client, well, the average time to summary judgment is a little over a year, then five years in, your client is going to be: what happened here, right? So our approach, again, is to identify similar cases, and similar cases means those five characteristics: the parties, the respective attorneys, and then the judge. We want to find cases that are alike one-to-one, because if you can find other cases that are almost 100% similar, or use how we create our models, which weigh the different component parts so we can get to a high degree of similarity even if not every single component matches up, now you can say with a straight face: it's actually going to take 720 days to get to summary judgment, and if it goes to trial it'll take X number.
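A minimal sketch of that kind of like-for-like matching, assuming invented component names and weights (the real model's features and weighting are Pre/Dicta's own): it averages time to summary judgment only over historical cases that closely resemble yours.

```python
# Hypothetical component weights for the five characteristics Dan names.
WEIGHTS = {
    "plaintiff_profile": 0.25,
    "defendant_profile": 0.25,
    "plaintiff_firm_profile": 0.15,
    "defense_firm_profile": 0.15,
    "judge_profile": 0.20,
}

def similarity(case_a: dict, case_b: dict) -> float:
    """Weighted share of matching components, in [0, 1]."""
    return sum(w for f, w in WEIGHTS.items() if case_a[f] == case_b[f])

def estimated_days_to_sj(target: dict, history: list[dict],
                         threshold: float = 0.8) -> float:
    """Average days to summary judgment over sufficiently similar past cases."""
    alike = [c["days_to_sj"] for c in history
             if similarity(target, c) >= threshold]
    if not alike:
        raise ValueError("no sufficiently similar cases in history")
    return sum(alike) / len(alike)
```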

Speaker 2:

But then, beyond that, what we also recognize is that there are many different ways a piece of litigation can conclude. Of course, motions are one: motions to dismiss, motions for summary judgment. But there are any number of other ways: settlement, alternative forms of dismissal, all these different paths. And here's really where the attorney's experience, and their research and analysis of the case, comes in, especially at the very beginning, when they're looking to provide their clients with serious estimates and budgets. They can do their research and say, well, I see this case going two ways. It's not going to be resolved on a motion to dismiss; it's too complicated, for whatever reason. Instead, we see this going either to settlement or to summary judgment. So now they look at our timeline and they say: okay, if it goes to settlement, it's actually around six months shorter than it would take for us to go to summary judgment. So you're litigating the case and, of course, you gave your estimate at the beginning. But this goes even beyond estimates and budgeting.

Speaker 2:

If you're litigating your case and you receive a settlement offer, well, do you take it? You might say, well, I have a really strong summary judgment argument. But if that's going to take another six months or a year of time and cost, that's something that certainly should come into the analysis of how you approach that settlement offer. Maybe you don't take that one, but you take a slightly higher one. At least you now understand, and have an appreciation of, what the cost of summary judgment is, because it's not cost-free, right? You might have to go through any number of additional depositions. You might have to hire experts if you don't already have them, and then, of course, there's the briefing and the argument. And again, every piece of litigation is disruptive for most companies; you can't minimize that. Companies produce widgets or whatever it is; they certainly shouldn't be involved in litigation any longer than needed. So that's where our timelines are particularly effective, in all of those sorts of use cases.

Speaker 2:

Now, when it comes to our motion models, it's very similar. Again, rather than simply saying a judge dismisses 80% of the time, we're looking to find any number of cases and judges that are like your judge and your case, so we can say: summary judgment before your judge, in your case, is 52%, and now you can be confident about that. One of the really interesting applications of our approach is that we aren't limited to the judge. Because we've broken the judge up into their DNA, there are many other judges that might share enough similarity within that DNA, and that allows us to capture additional cases beyond your judge. This is especially important for motions that are truly significant and perhaps case-changing, but of which any given judge has seen very few, and that is the class certification motion. Most judges have, I would say, fewer than five, if you look at all federal judges. So if you want to say, well, last time the judge granted this in two out of five cases, or three out of five, so 60 or so percent, that is way too small of a data set to make any sort of statistical leap. But if we can incorporate, let's say, 150 cases, or even 75, because we've identified enough judges that are similar to your judge, that more or less are the same, and we can include their cases, now we can provide a meaningful statistic you can use to assess the likelihood of class cert being granted here.
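Sketching that pooling idea with hypothetical structures: when a single judge has only a handful of class certification rulings, the estimate can be stabilized by borrowing rulings from judges whose profile is close enough.

```python
def judge_similarity(a: dict, b: dict) -> float:
    """Share of matching profile fields (a stand-in for a weighted model)."""
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

def pooled_class_cert_rate(target: dict, judges: list[dict],
                           min_similarity: float = 0.85) -> float:
    """Grant rate pooled across all judges close enough to the target judge.

    Each entry in `judges` carries a feature profile plus that judge's
    class-cert grant and ruling counts, so 5 rulings can grow to 75 or 150.
    """
    grants = rulings = 0
    for j in judges:
        if judge_similarity(target["profile"], j["profile"]) >= min_similarity:
            grants += j["class_cert_grants"]
            rulings += j["class_cert_rulings"]
    if rulings == 0:
        raise ValueError("no sufficiently similar judges found")
    return grants / rulings
```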

Speaker 2:

You also mentioned venue selection. This is another area where being agnostic to the facts and the law enables us to do something that is nearly impossible otherwise: running the analysis prior to a case being filed (and even after a case is filed, when it comes to transferring venue). Before a case is filed, how do you determine where to file? You can say, well, the law in this jurisdiction might be better than that jurisdiction. Or, as plaintiffs, we think this jurisdiction might be more amenable to class certification than the other.

Speaker 2:

But how do you really assess that? That's all very nice, but you're going to go before a judge, and how is that judge going to rule? We can run our analysis even before filing, and the reason is that we don't account for the law and the facts. What we do is simply look at who the parties are and who the attorneys are. Now, of course, when you file in any given jurisdiction, depending on the jurisdiction, there are three judges, five judges, 20 judges. So how do we account for the fact that you don't have a particular judge yet? What we do, at a very simplistic level, is aggregate all the judges (although, depending on any number of factors, each is weighted differently) and then provide an overall score for the entire jurisdiction.
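A toy version of that aggregation, with an invented weighting scheme (Dan says only that each judge "is valued differently"): run the per-judge prediction for every judge who might draw the case and combine them into one district-level score.

```python
def venue_score(district_judges: list[dict], predict_survival) -> float:
    """Weighted chance of surviving a motion to dismiss across a district.

    district_judges: [{"judge": <profile>, "weight": <float>}, ...], where
    the weight might reflect each judge's share of case assignments.
    predict_survival: callable mapping a judge profile to a probability.
    """
    total = sum(j["weight"] for j in district_judges)
    return sum(j["weight"] * predict_survival(j["judge"])
               for j in district_judges) / total

# Illustrative comparison before filing:
#   venue_score(district_a_judges, model) -> 0.70
#   venue_score(district_b_judges, model) -> 0.30
```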

Speaker 2:

So you can now say: if we file in this jurisdiction, we have a 70% overall chance of surviving a motion to dismiss, and if we file in that jurisdiction, we only have a 30% chance. Now, I've always heard the complaint: this is venue shopping, forum shopping, right? Is it forum shopping? Flip it around to the law. If you could say, well, in this jurisdiction the law gives us a 30% chance, it's not in our favor, and in that jurisdiction it's 70% in our favor, and an attorney said, despite the fact that the law in this jurisdiction is really not in our favor, we're going to file there anyway, that would, I don't want to say necessarily be malpractice, but it certainly approaches malpractice. So how can you say we're doing something wrong when we're simply taking the judicial elements into account? And this is not only for plaintiffs looking to file.

Speaker 2:

We actually had an instance where a client approached us after they had already been sued. They were in one of the federal district courts in Florida, and they wanted to know their likelihood of success on their motion to dismiss. The litigation had anywhere between $25 and $40 million at issue; that was the claim. And sure enough, we ran our analysis.

Speaker 2:

We said: you actually have a very, very low likelihood of your motion to dismiss being granted. To translate, that means it's going to go to discovery, there's going to be an incredible cost to the client, and maybe you'll get out with some settlement, hopefully less than whatever the original demand was. They said: one of the defendants is in California; what's our likelihood in the Central District of California? We ran that, and they had an over 70% likelihood that their motion to dismiss would be granted. So simply by using this strategically, they may literally have saved their client tens of millions of dollars. And our analysis is so simple to deploy: all we need to do is look at their case, and it takes us under five seconds to provide that information, information that can literally save tens of millions of dollars.

Speaker 1:

You said you're looking at behavioral analytics when it comes to a judge. What goes into that that's not available in legal research?

Speaker 2:

So what we're doing with the judge is, again, not looking at their opinions or decisions. We don't care about their judicial philosophy or approach to the law. Again, we're looking at their genetic makeup, so to speak. In this instance, rather than breaking down the parties into publicly traded et cetera, we're looking at the components or characteristics that make the judge, right, that make us as human beings: our experiences, who we are.

Speaker 2:

So we're going to look at, of course, the obvious ones: law school; where they practiced; was it in public service or private practice; maybe were they a state court judge or a politician before being elevated to the bench. As well as non-obvious ones, let's say age or net worth or any number of other characteristics that have seemingly no relation to the law, and truthfully they probably don't have any relation to the law. But for behavioral analytics, those can be very valuable. The way I like to think about this is: you're on your computer doing a search, or you're on, let's say, your newspaper, whatever it is, and an ad pops up and says, hey, you can vacation in the Bahamas. And you say, wow, I was literally just talking to my wife about that 20 minutes ago, and I hadn't thought of it before then.

Speaker 2:

And immediately we think maybe Google is listening in on our phone or some other device. Maybe, or maybe not. But presumably the actual approach is something very similar: they understand all of our characteristics. They know where we live, they know who our neighbors are, they know our level of education, they know how much money we make, they know who we're emailing, right? And they combine that with our buying patterns. So, rather than looking at judicial decisional patterns, they look at buying patterns. They combine those two elements and then provide those highly targeted ads that seemingly predict our behavior: right now I am going to take that trip to the Bahamas. That, in a way, is what we're doing: we're combining the personality elements, the characteristic elements of the judge, with their decisions, understanding the patterns and linking them to what the judge is made of.

Speaker 1:

How does this work with appeals? Because I guess there wouldn't be as much data, but the judges do have careers, so they have rulings and behaviors. That said, does this apply at the appellate level?

Speaker 2:

So we have not taken it to that level, but certainly, in terms of our product timeline, that is an area we are keenly interested in exploring. It has a different challenge, of course: we have to account not for one judge but for three judges at the same time. Conceptually, that simply means that rather than having one genetic character, if you will, we have three. So we would combine all three, try to discern patterns associated with those three, collectively or individually, and move into the appellate realm. Certainly that's an area we hope to explore, because our approach is: if we can understand behavioral analytics, if we can understand how people behave, that applies, like I said, to the Bahamas trip, to judges making decisions and, presumably, to appellate judges. The one question I'm asked fairly frequently is: what about Supreme Court justices? I would argue that anyone looking to predict how Supreme Court justices will rule is probably on a fool's errand, because there's such a small data set, if you think about the number of cases they hear, the number of justices, and which one is writing which opinion or decision. And, frankly, well, now we have the shadow docket, if you will, but for the most part, in the most significant cases there are written opinions, and you can discern judicial philosophy from those. So there's really no need for behavioral analytics there, and frankly, I would argue, it's really impossible to use them as it relates to Supreme Court justices.

Speaker 1:

So how is it going? What's your track record?

Speaker 2:

The way we look at a track record is not simply by looking at any handful of cases. If our clients have used this for 5, 10, 15 cases, even if we get all 15 right, or we get 13 wrong, that alone doesn't assess accuracy. To determine accuracy, we randomly excluded 50,000 motions from our models. After we built our models, we ran them on those 50,000 motions the models had never seen; they were run blind. The model provides its prediction, and we compare that against the real-world outcome. That's how you test accuracy; looking at a handful of cases obviously isn't all that helpful. Now, in terms of how our clients are deploying this: it really empowers them in a very different way and offers them a different approach to how they can advise their clients, for settlement, as I described, or which motions to litigate, and where and how they should be filing them.
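The test Dan describes (randomly excluding 50,000 motions, building the models without them, then scoring predictions against the real outcomes) is a standard holdout evaluation. A generic sketch, with the 50,000 figure taken from the interview and everything else hypothetical:

```python
import random

def holdout_accuracy(motions: list[dict], train, predict,
                     holdout_size: int = 50_000, seed: int = 0) -> float:
    """Train on everything except a random holdout, then score on the holdout."""
    rng = random.Random(seed)
    shuffled = motions[:]
    rng.shuffle(shuffled)
    holdout, training = shuffled[:holdout_size], shuffled[holdout_size:]

    model = train(training)  # the model never sees the held-out motions
    hits = sum(predict(model, m) == m["outcome"] for m in holdout)
    return hits / len(holdout)  # compared against real-world outcomes
```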

Speaker 2:

Additionally, from their clients' perspective: when I went in-house, I assumed it would be pretty much the same as practicing law. You're no longer on the billable hour, which is kind of nice, and you get to tell other attorneys what to do. But I came to understand there's a fundamental difference: your clients are no longer other attorneys. When you're in a firm, your clients are attorneys. When you're in-house, your client is someone on the business side, and what they're really not interested in is some detailed, nuanced take on the law that the attorney spent who knows how many hours researching and writing: well, based on some new legal precedent, maybe your case will go this way or that way.

Speaker 2:

The way that most large businesses make decisions (and they all have significant components of the organization dedicated to this) is based on data analytics.

Speaker 2:

It would be silly to ignore that, and in fact nearly every company of any size has recognized they need to crunch the data.

Speaker 2:

So, from the perspective of our law firm clients and their clients, they can now go to the ultimate clients on the business side and show them that they too are operating with the same approach.

Speaker 2:

They too are in sync with how the business operates, and they can give them something very different from simply saying, well, here's a memo, or here's what our attorney is thinking. Now they can say: look, we've done a deep data analysis. We've had sophisticated algorithmic models determine how long this case will take. So we can now say with confidence (and our attorneys have also looked at the facts and the law) that this is going to be a five-year slog. That's very, very different from the way it's been done before, and it puts the attorneys and firms using our product on a very different level from those still operating just with the more traditional tools of research and writing, not even getting into the use of statistics that are meaningless or often wrong and just lead to incorrect assumptions and advice.

Speaker 1:

Well, speaking of assumptions, I thought a very hot topic you addressed in a case study was the assumption that whether a judge is considered liberal or conservative, or was appointed by somebody considered liberal or conservative, has an impact on their decisions with regard to corporations. The assumption would be that a liberal judge is going to rule against corporations and a conservative judge will rule for them. Or maybe gender comes into play. Did Obama appoint them? Did Donald Trump appoint them? So tell me a little bit about what you found in that case study.

Speaker 2:

Sure. So, first and foremost, our ability to do that analysis exists only because we've done that classification of the various parties and attorneys. In other words, try going to any database and discerning which cases involve corporations. Let's just start with that question. Because of the needs of our predictive modeling, we had already looked at that aspect of cases and surfaced it, and that enables us to limit the cases we're looking at to those involving corporations. That's first. Second, what we've determined is that any single characteristic, whether it be political affiliation or gender, certainly in isolation, does not have any predictive value. Now, of course, in some instances relying on one might even be borderline problematic from any number of discrimination standpoints. But people cannot be reduced to one or two or three elements. If you really want to understand behavior, you have to do a multifaceted analysis, and what we were trying to do with that case study was tease that out on a very simple level: take the assumptions people jump to all the time and show that if you start combining different data points, it leads you to an entirely different conclusion. So, as you said, some people assume that Democrats or liberals are more biased against corporations, and when you look at that overall, there is a slight difference between judges appointed by Democratic presidents and those appointed by Republicans. But that is, again, too simplistic. We wanted to dig in and ask: what if you incorporate gender? There is some notion that certain genders are more favorable, certainly when combined with political affiliation. And what we actually found, when you crunch the data, is that that is incorrect. For example, female judges appointed by President Obama are the equivalent of Republican appointees overall: rather than being less favorable to corporations, they are as favorable as the group with the highest favorability toward corporations. On the other hand, the least favorable toward corporations are female judges appointed by President Trump.

Speaker 2:

Now, let's walk this back and take off our blinders. Recently, unfortunately, one of the greatest behavioral-analytic thinkers, Daniel Kahneman, passed away. He talked a lot about how we have these inherent biases, or heuristics as he called them, that lead us to make all these different types of assumptions, and this is a perfect example. His book is called Thinking, Fast and Slow, and the point is that if you take a step back rather than simply jumping to the conclusion, the light bulb goes off, if you will, and you say: huh, now I understand that. So this is, I think, a pretty good example of that.

Speaker 2:

Think about (and obviously this is somewhat of a generalization, but I think it's pretty true) who President Obama appointed. What types of judges did he appoint? He was appointing judges that typically went to top-tier law schools, that worked in big law, that may have left that for some major multinational corporation. And yes, they may have been donors, or affiliated with donors, or somehow had some political connection, but that's a very particular type of attorney, and now they're on the bench. Now let's look at President Trump.

Speaker 2:

Now, I think everyone can agree, irrespective of their political views, that President Trump is an anomalous president, even as it relates to Republicans. He's a different type of Republican. And when you think about that, and even how quote-unquote pro-business he is or isn't, the judges he appointed were not traditional Republican judges. They didn't necessarily come from the same pool, attend the same law schools, or work in the same fields as other Republican-appointed judges. So once you've taken that step back and started thinking slow, you say: wow, that is certainly anomalous.

Speaker 2:

Now, once something is anomalous, inevitably it will present itself in an anomalous way.

Speaker 2:

Now, it could very well have been that they came out the most favorable by 15 points; here, it's actually that they're the least favorable, by 10 percentage points from the closest grouping.

Speaker 2:

And again, once you take that step back, once you start thinking rather than simply jumping to conclusions (Republican, Democrat, male, female), you see how reductive that is. The way I like to put it: if I walked into a cocktail party and someone introduced themselves and said, well, I'm a Republican, and I said, well, now I know everything about you, that's essentially what you'd be doing. But that's not what you can do if you're a company like ours, looking to provide highly sophisticated attorneys and their clients with information they can really rely upon. You can't be that reductive, either by looking only at biographical characteristics or, on the flip side, by looking at raw statistics that have no connection with the particulars of the case at hand.

Speaker 1:

Gotcha. Okay, well, I think the conclusions from that study were fascinating, because they go against everything you might think, one way or the other. As you said, female judges appointed by President Obama were just as business-friendly as GOP-appointed judges, and in some differences you saw only a couple of points in either direction. So anyway, as I said before, I love actual facts, and this is such an important topic right now that I think people will be interested in it. I'll flag that paper in the show notes so people can take a look and see what you did there. Well, Dan Rabinowitz, thank you very much for talking to me today. This is a fascinating topic.

Speaker 2:

Well, Tom, thank you for having me. I really appreciate the discussion.

Speaker 1:

That concludes this episode of the Emerging Litigation Podcast, a co-production of HB Litigation, Critical Legal Content, vLex, Fastcase and our friends at Law Street Media. I'm Tom Hagy, your host, which would explain why I'm talking. Please feel free to reach out to me if you have ideas for a future episode, and don't hesitate to share this with clients, colleagues, friends, animals you may have left at home, teenagers you've irresponsibly left unsupervised, and certain classifications of fruits and vegetables. And if you feel so moved, please give us a rating. Thank you for listening.

Chapter Markers

Litigation Prediction and Forecasting Innovation
Judicial Behavior Predictions and Analysis
Judicial Analytics for Legal Strategy
Analyzing Judicial Decision Making in Litigation