Emerging Litigation Podcast
Litigators and other legal and risk professionals share their thoughts on ELP about new legal theories or areas of litigation that plaintiff attorneys, defense counsel, corporations, risk professionals and others will want to be aware of. The host is Tom Hagy, long-time legal news enthusiast, former editor and publisher of Mealey's Litigation Reports, current Editor-in-Chief of the Journal on Emerging Issues in Litigation, and owner of HB Litigation Conferences and Critical Legal Content. ELP is a co-production of HB, CLC, Law Street Media, and vLex Fastcase. Contact Editor@LitigationConferences.com.
Technology-Assisted Review: Sara Lord Interviews Data Scientist Lenora Gray
Our Legal Tech Host Sara Lord speaks with data scientist and eDiscovery expert Lenora Gray of Redgrave Data.
Discovery is a staple in any litigation practice, and it has been transformed by technology assisted review tools – or TAR. eDiscovery has developed into its own specialty – with eDiscovery experts on staff who know all there is to know about the technology, standards, processes, and practices.
But every litigator needs to understand how eDiscovery tools work. They should be able to answer questions about the approach being used, why that approach was chosen, the reliability of the assisted review, the human oversight in place, and more.
This, like many areas of law, is filled with acronyms, specialized terminology, and a changing landscape – from technology developments to evolving legal standards to ethics competency issues. But because so much of the work is done by a technology vendor that has specialized tools, it can feel like your review is based on blind faith and that finding the pieces to support your case requires you to rely on dumb luck.
Can we do more than pray to the document gods? Listen as Sara Lord interviews Lenora Gray, Data Scientist at Redgrave Data.
Lenora Gray is an eDiscovery expert and data scientist who is skilled in auditing and evaluating eDiscovery systems. In her role as data scientist at Redgrave Data, she designs and analyzes structured and unstructured data sets, builds predictive models for use in TAR workflows, implements automation solutions, and develops custom software. Prior to joining Redgrave, she spent 12 years as a paralegal, a role in which she managed discovery teams. Lenora is currently pursuing her M.S. in data science from Johns Hopkins University and earned a B.S. in computer science from Florida Atlantic University.
I welcome back Sara Lord as legal tech guest host for the Emerging Litigation Podcast. A former practicing attorney with a decade of experience in data analytics, Sara applies her experience at law firms and companies to explore and address the cultural and practical barriers to diversity in law, supporting value creation through legal operations and client-first, business-oriented practices. In her recent work as Managing Director of Legal Metrics, she led a team of experts focused on providing the tools to support data-driven decision making in legal operations and closer collaboration between law firms and their clients through automation and standardization of industry metrics. Sara earned her J.D. from New York University School of Law.
*******
This podcast is the audio companion to the Journal of Emerging Issues in Litigation. The Journal is a collaborative project between HB Litigation Conferences and the vLex Fastcase legal research family, which includes Full Court Press, Law Street Media, and Docket Alarm.
If you have comments, ideas, or wish to participate, please drop me a note at Editor@LitigationConferences.com.
Tom Hagy
Litigation Enthusiast and
Host of the Emerging Litigation Podcast
Welcome to the Emerging Litigation Podcast. This is a group project driven by HB Litigation, now part of Critical Legal Content, and vLex companies Fastcase and Law Street Media. I'm your host, Tom Hagy, longtime litigation news editor and publisher and current litigation enthusiast. If you wish to reach me, please check the appropriate links in the show notes. This podcast is also a companion to the Journal on Emerging Issues in Litigation, for which I serve as editor-in-chief, published by Fastcase Full Court Press. And now here's today's episode. If you like what you hear, please give us a rating. So welcome back to another episode. We're going to get practical on this one, as we sometimes do, to try to break or mix things up a little bit for you, and we're also going to have our guest host, Sara Lord, back. Sara's been doing a series for us on legal tech, and you're lucky to have her, and so am I. So here we go.
Speaker 1:This one's going to be about eDiscovery. We have talked about this one before. It's a huge part of any litigation practice, as anybody in litigation knows. It's been transformed quite a bit by technology-assisted review tools, or TAR is the acronym there. eDiscovery is its own specialty. We've got eDiscovery experts on staff, often, who know all there is to know about the technology, the standards, the processes and practices. Every litigator needs to understand how e-discovery tools work. You've got to be able to answer questions around the approach being used, why a specific approach was chosen. It's good to understand the reliability of the assisted review, the human oversight implemented, too. You know it's funny, we're talking about humans. That's been happening more and more in my conversations recently.
Speaker 1:As we talk about these things, we say, well, how are the humans doing? That was never a thing as far as I know. I don't know, maybe if I worked at NASA or something. There's a lot of jargon here, a lot of acronyms, specialized terminology. That's what jargon is, and it's always changing because technology is changing. Also changing are the legal standards and ethics, competency issues around technology. Because so much of our work is done by a technology vendor that has specialized tools, it can feel like your review is based on blind faith and that finding the pieces to support your case requires you to rely on dumb luck. Can you do more than pray to the document gods? Yeah, for those who celebrate. But I'll tell you what. Listen as Sara Lord interviews Lenora Gray.
Speaker 1:Lenora is an e-discovery expert and data scientist, skilled in auditing and evaluating e-discovery systems. In her role at Redgrave Data, she designs and analyzes structured and unstructured data sets, builds predictive models for use in TAR workflows, implements automation solutions and develops custom software. Before Redgrave, she spent 12 years as a paralegal, a role in which she managed discovery teams. She's currently pursuing her MS in data science from Johns Hopkins University. She earned her BS in computer science from Florida Atlantic University and, like I said, I welcome back Sara Lord to guest host.
Speaker 1:Sara is a former practicing attorney with a decade of experience in data analytics. She applies her experience in law firms and businesses to explore and address the cultural and practical barriers to diversity in law, supporting value creation through legal operations and client-first, business-oriented practices. In her recent work as Managing Director of Legal Metrics, she led a team of experts focused on providing the tools to support data-driven decision-making in legal operations and closer collaboration between law firms and their clients through automation and standardization of legal metrics. Sara earned her JD from New York University School of Law. Also, you want to check out our bonus content. We're going to have that on YouTube and at litigationconferences.com, and there you will see that Sara makes it pretty clear that she doesn't love that initial review part of discovery. I don't know, I think that comes through. But check out the video on YouTube and at litigationconferences.com for some bonus content. And now here is Sara Lord interviewing Lenora Gray, a data scientist at Redgrave Data. I hope you enjoy it.
Speaker 2:Okay, welcome to all you podcast people in podcast land. We are here to talk to Lenora Gray. I'm very excited about this discussion. Thank you for joining me. Thank you for having me. We are going to talk about TAR and eDiscovery, which I know everyone loves this topic, so it's going to be very exciting. So let's just start with the basics. How would you describe technology-assisted review for those who are new to litigation or don't regularly work with the review process?
Speaker 3:Well, technology-assisted review is a way for us to use computers and machine learning algorithms to make document reviews more efficient. So we use a form of machine learning that's called supervised learning. We call it supervised because it learns by example. So basically, let's say you have a large collection of documents that might be important for you to, you know, produce to the other side, and you don't want to review them all one by one because it would cost you a gazillion dollars and take you so much time to do so. So you can have one, or maybe a few, attorneys take a small set of those documents and code them for whatever issue you have: relevance versus non-relevance, privilege versus non-privilege, or any categorical issue that you can devise. And then the supervised learning algorithm uses that training data that your attorneys coded, and it produces a statistical model that we call a classifier. The classifier can then be used to predict the coding of the rest of that document population. So it'll make a statistically valid guess on what the coding would be if the attorneys had looked at it, and the classifier can also give you a confidence level on its prediction. Let's say, if it has a confidence of one, it's 100% sure that this document is relevant as opposed to non-relevant. We can rank the documents by that confidence level and bring them to the attention of the attorneys so they can review the relevant ones first. Or, if we are very, very confident in the accuracy of our classifier, we can just produce them as they are without the attorneys having to look at them at all. So it basically saves you a lot of time and effort when it comes to conducting a document review.
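To make that workflow concrete, here is a minimal Python sketch, assuming scikit-learn is available: train a simple classifier on a small attorney-coded sample, then score and rank the rest of the collection by the model's confidence. The documents, labels, and model choice are illustrative stand-ins, not what any particular TAR platform actually uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical attorney-coded training sample (1 = relevant, 0 = not relevant).
coded_docs = [
    "Board memo discussing the disputed licensing agreement",
    "Email chain negotiating royalty rates under the agreement",
    "Office holiday party invitation and parking instructions",
    "Newsletter recap of the company softball league",
]
coded_labels = [1, 1, 0, 0]

# The rest of the collection, not yet reviewed by anyone.
unreviewed_docs = [
    "Draft amendment to the licensing agreement royalty schedule",
    "Cafeteria menu for the week of March 3",
]

# Turn text into features and fit a simple classifier on the coded sample.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(coded_docs)
classifier = LogisticRegression().fit(X_train, coded_labels)

# Score the unreviewed documents: the probability of relevance acts as the confidence level.
X_rest = vectorizer.transform(unreviewed_docs)
confidence = classifier.predict_proba(X_rest)[:, 1]

# Rank unreviewed documents so attorneys see the most likely relevant ones first.
for doc, score in sorted(zip(unreviewed_docs, confidence), key=lambda p: p[1], reverse=True):
    print(f"{score:.2f}  {doc}")
```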
Speaker 2:Okay, so let's say I'm a litigator beginning a discovery process. What are the primary TAR options available to me?

Speaker 3:The main options are TAR 1.0 and TAR 2.0. TAR 1.0 has two phases. The first phase, the active learning, which is the supervised learning, iteratively (sorry, that's a hard word for me) selects documents for training purposes, and reviewers code those documents, and the resulting coded set is used to train and produce the classifier, like we said before. In the second phase, once we are sure of the accuracy and we like what the classifier is doing, it's used to classify the rest of the documents, and then the rest of the documents might get reviewed by attorneys or, like, lower-cost reviewers, contract reviewers instead of senior attorneys, or just a privilege screen, or they might be produced without further review from anybody. There's also TAR 2.0, which omits the second phase.
Speaker 3:So what it does is it in each new iteration we produce a new classifier and as documents are coded in the collection, the coded documents are used to produce a new classifier, and this keeps going and keeps going until an agreed upon number of relevant documents have been found in the collection. It's usually around 80% for legal cases and, as I said before, there are some hybrid options. But these are the two basics and ultimately, the choice between choosing TAR 1.0 and TAR 2.0 should be guided by specific objections, constraints and characteristics of your project, your document collection, what it is the issues are in your case. So consulting with e-discovery experts at this juncture kind of like what we do at Record Data they can help you determine the most appropriate approach, as, like we would know what to look for at this stage that would have downstream effects in the process going forward.
Speaker 2:Do these tools incorporate generative AI?
Speaker 3:Some of the major players in the eDiscovery platform space are currently incorporating Gen AI into their software, not just for TAR review but for, like, deposition prep and summarizing some of those documents. But as far as incorporating Gen AI at Redgrave Data, we decided to kind of take a step back and make sure that generative AI, when evaluated against the tools we're currently using in eDiscovery, matches up and is a viable option for our clients. So what we're doing now is research to compare generative AI against our current baseline machine learning algorithms, like logistic regression and support vector machines, which are big statistical words, but it's basically the computer algorithms that we currently use, and we're comparing them to Gen AI. And if Gen AI proves to be at least as effective as human review or, like, a TAR 1.0, while it's still comparable in time and cost, then it could be used in place of the current algorithms we have.
Speaker 3:Yeah, in the resources I provided, there's an article that my colleagues Jeremy Pickens and Tara and Will Zett just wrote and published in the Sedona Journal, and it outlines the difference between using traditional machine learning and Gen AI in TAR 1.0 workflows. Also, Gen AI has potential beyond regular traditional machine learning, because it can be used for privilege review and summarization of groups of documents, and it has a huge capability to point out relevant passages and also provide explanations of why it classified a document a certain way. So it has a leg up if we can prove it's as valid and effective as what we're currently using. Also, Gen AI is being used behind the scenes to improve language processing tools for, like, named entity extraction, redactions and related tasks like that. So it kind of has a lot of potential, but we want to be sure that it's as effective as what we have now.
Speaker 2:You co-authored a white paper called Beyond the Bar: Generative AI as a Transformative Component in Legal Document Review. Is that the article you were just referring to, that your colleagues also co-authored, or is this a different one?
Speaker 3:Oh no, this is a different one. We're constantly doing research and actual scientific experimentation on these tools at Redgrave Data. So our paper Beyond the Bar was actually submitted to a scientific conference recently. We beefed up all of the mathematics to make sure it was a scientifically strong paper so that we could submit it, so it would be beyond just a white paper with our opinions on what was going on.
Speaker 3:So for this paper, Beyond the Bar, which was collaborative research with Relativity, one of the big e-discovery platform players, we designed a head-to-head comparison of first-level human review and generative AI. And I say first-level review because when you do a fully human review, it's usually done in two stages. In the first stage, you'll have a set of contract attorneys review and label, you know, a set of documents, and then in the second stage, more senior attorneys will come in and check what they have coded to make sure that they have done the right thing, and there might be, you know, case attorneys that guide them on that path. In our experiment we provided a review protocol to both the contract attorneys and to the generative AI algorithm, as, like, a prompt, like you do in ChatGPT, and both of their coding results were compared against the gold standard of how the senior attorneys would have coded the documents. And in the results we saw that the generative AI system had much higher recall than the human review. It found 96% of the relevant documents, as opposed to 54% that were found by the humans.
Speaker 3:E-discovery standards usually require around 80% recall, so this is very, very good. However, the precision measure for generative AI was lower than the human reviewers'. We had 60% precision as opposed to 91% precision from the humans. So precision is a measure of the documents that you found, that you coded as relevant: how many of them were actually gold-standard relevant? Like, how precise is your classification? So, while we were able to get a much, much higher recall, it's incurring extra costs in producing documents that are not relevant along with the relevant ones, although it's finding most of the relevant ones. So there's more work that needs to be done. But we have numerous avenues for improving the performance. Like, we didn't do any fine-tuning of the prompt that was specific to the matter; we, you know, did the one prompt. And so there's much more to come in experiments around Gen AI for document review.
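As a quick worked example of those two metrics, here is a minimal sketch computing recall and precision from hypothetical counts chosen only to mirror the percentages above; these are not the study's actual tallies.

```python
# Recall: of all truly relevant documents, how many did the review mark relevant?
# Precision: of the documents marked relevant, how many really were relevant?
# The counts below are hypothetical, chosen only to mirror the percentages above.
truly_relevant_total = 1000    # gold-standard relevant documents in the collection
marked_relevant = 1600         # documents the review (human or GenAI) marked relevant
correctly_marked = 960         # of those marked relevant, how many were truly relevant

recall = correctly_marked / truly_relevant_total    # 960 / 1000 = 0.96
precision = correctly_marked / marked_relevant      # 960 / 1600 = 0.60

print(f"recall = {recall:.0%}, precision = {precision:.0%}")
# High recall with modest precision means the review finds nearly everything relevant
# but sweeps in non-relevant documents along the way, which is the trade-off described.
```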
Speaker 2:For the future of review here, it seems like we may be on a path to a quicker review process, a more affordable review process and, for those of us who do not enjoy first-level review, a less emotionally painful process as we enter into the future of discovery.
Speaker 3:And also a more robust review, because now the generative AI will be able to tell you why it made a decision and also point out the passage of the document that was most relevant to its decision. So it kind of, like, gives you these breadcrumbs in the actual documents to tell you why it made this determination. So you can actually pinpoint where it might go wrong and where it's going very right, easily, with actual text. So it helps you a lot. So let's say that there's some passage, some stock passage or some boilerplate language in the documents that you haven't seen so far, and it's picking it up and, you know, making it relevant when it shouldn't be. You can, you know, flag that and say, just because it has this doesn't mean it's relevant, so that's not going to be considered in the statistical model; go back and tell me if this is really relevant, based on the rest of the text.
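As a sketch of what that structured output could look like in practice, here is a small Python example that asks a model for a relevance call plus its explanation and key passage. The prompt wording, the JSON keys, and the stubbed call_llm function are all illustrative assumptions, not the actual prompt or system used in the study.

```python
import json

def call_llm(prompt: str) -> str:
    """Stub standing in for a real generative AI call; returns a canned response so the
    sketch runs end to end. In practice this would go to whatever model API is in use."""
    return json.dumps({
        "relevant": True,
        "explanation": "The email negotiates royalty terms covered by the review protocol.",
        "passage": "we agreed to raise the royalty rate to 6% effective Q3",
    })

def classify_with_explanation(document_text: str, review_protocol: str) -> dict:
    """Ask the model for a relevance call plus its reasoning and the key passage.
    The prompt wording and JSON keys here are illustrative assumptions only."""
    prompt = (
        f"Review protocol:\n{review_protocol}\n\n"
        f"Document:\n{document_text}\n\n"
        "Respond in JSON with keys: relevant (true/false), explanation, passage."
    )
    return json.loads(call_llm(prompt))

result = classify_with_explanation(
    "Counsel, we agreed to raise the royalty rate to 6% effective Q3.",
    "A document is relevant if it discusses royalty terms under the licensing agreement.",
)
print(result["relevant"], "-", result["passage"])
# Because the passage and explanation come back with the call, a reviewer can spot
# decisions driven by boilerplate and instruct the process to discount that language.
```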
Speaker 2:That to me sounds very exciting. So when will TAR be replaced with fully robotic review, which I am going to call FUR, because why not?
Speaker 3:Is this one you made up, because I've never heard of it.
Speaker 2:Yes, I want it to catch on, so let's all just start calling the next phase fully robotic review, aka FUR. We'll have TAR and FUR. Now, when the FUR gets in the TAR it might get a little messy, but I don't know.
Speaker 3:Oh, my gosh. No, this concept was so far off our radar it didn't even have a name.
Speaker 2:I'm the first. I did it.
Speaker 3:I think it's important to consider that, while TAR is about technology, the success of a TAR effort depends heavily on the larger e-discovery process around it, and that larger e-discovery process requires attorneys and specialists and technologists and people that know how to do the work.
Speaker 3:The machine isn't going to do everything, so the negotiations around, for instance, how the review collection is defined, are supremely important, like negotiating custodians, date ranges, file types, keyword filters.
Speaker 3:This can affect both the volume of the data that has to be reviewed and it can also determine how much of the data can be reviewed, be reviewed, and it can also determine how much of the data can be reviewed through TAR, because there are certain file types that TAR is just not, you know, fitted for. Certain decisions made on the request for production can affect the difficulty of the classification task, so it can make it more difficult to TAR to determine whether it's, you know, relevant, non-relevant, privileged, non-privileged and agreed upon deadlines strongly affect which TAR workflow you'll be able to use and which one is most practical. Some of them take, you know, longer to get the classifier right than others, so you know that also depends. So humans will be needed for each of those steps in the process, so the time we will have a like a set it and forget it review process, I think is a long way out.
Speaker 2:Okay, so TAR is still our best option and, recognizing that the technology is always evolving, are there key terms that litigators need to understand and key questions they should ask to select the right tool?
Speaker 3:Well, sometimes the TAR processes are called by names other than 1.0 or 2.0, so practitioners would want to be aware that there are other names for it. I would actually encourage them to, you know, figure out what exactly they are doing and how it compares to the standards that we have for TAR 1.0 or 2.0, because there might be some hybrid process that your provider is using that you don't know about. The TAR evaluation metrics are important, and knowing how those work and what they signify: the recall and precision numbers. Recall is the percentage of relevant documents that you were able to find. Precision is how accurate your classifier is: of the documents that you found, that you said were relevant, how many of them were really relevant, based on the gold standard? Those will help you set objectives for your TAR process and do the negotiation on the front end. You know, what level of recall are we looking for here? You would want to know what a confidence interval is and how they work, which is basically a range of values that the true value is estimated to be in at a certain confidence level. So we'll say we found 96% of the relevant documents and we're 95% confident in that number: it's between, you know, 95.5 and 96.5. It's somewhere in there. We think it's 96, and we're 95% confident about that.
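To make the confidence interval idea concrete, here is a minimal sketch using a normal-approximation interval for a recall estimated from a reviewed sample. The counts are hypothetical, and real validation protocols may use exact binomial or other interval methods.

```python
import math

# Hypothetical validation sample: among the truly relevant documents in the sample,
# how many had the TAR process actually identified as relevant?
sample_relevant = 400        # relevant documents in the random sample (per human review)
found_by_tar = 384           # of those, how many the TAR process had marked relevant

recall_estimate = found_by_tar / sample_relevant                 # 0.96

# Normal-approximation 95% confidence interval for a proportion (z is about 1.96).
z = 1.96
margin = z * math.sqrt(recall_estimate * (1 - recall_estimate) / sample_relevant)
low, high = recall_estimate - margin, recall_estimate + margin

print(f"Estimated recall {recall_estimate:.1%}, "
      f"95% confidence interval roughly {low:.1%} to {high:.1%}")
# Reads like the example above: we think recall is about 96%, and we are 95% confident
# the true value sits in a band around that number.
```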
Speaker 3:Terms that have to do with defining the review collection, such as culling, which is when you reduce the document population based on the date range or file type or keyword searches. A lot of that happens, you know, happens in the processing stages and things. Deduplication is brought up a lot, which is where we replace multiple identical copies of a document in a document collection by just one representative copy. The nesting, which is where you remove a bunch of system files. When you collect documents from a computer system or files, it comes with all of the accompanying little junk files that you don't need to review.
Speaker 3:So some of those are important to know when you're negotiating TAR protocols and I also have provided some resources to get anyone starting and learning about TAR and the glossary for TAR and for a deeper understanding of TAR and AI and e-discovery practice. We also have training programs specialized that are ideal for legal and IT professionals and Red Grave Data provides them through our education and training program. So if you want to do a deep dive on the essentials, the best practices and ethical considerations. Consider going to our website and looking it up, because we've got some really good stuff there Fantastic.
Speaker 2:Fantastic. I've heard about review system audits and the value around those, but I don't really know when they should be considered. Can you tell us a little bit about, from a best practice standpoint, when we should be considering review system audits?
Speaker 3:Well, TAR processes and systems have to be evaluated at a couple of stages. To ensure that the classifier is bringing back the correct documents, you will, you know, evaluate the classifier when it's created. So in, like, a TAR 1.0 workflow, you have those attorneys that go and code the small set of documents, you use it to build a classifier, and then you check if the classifier is doing its job correctly, if it's pulling back the correct relevant documents. And then you would also evaluate it at the stage where you have to certify to the other parties that the result of your coding has met the established objectives: we recovered at least 80% of the relevant documents from this collection, and we're certifying it using this. Usually this is done by random sampling. So we'll select a subset of the documents from the entire set and use our statistical equations to estimate whether or not the TAR process was effective, whether or not it's bringing back a requisite amount of the relevant documents. However, random sampling is just one of many methods we can use to do this, and it helps us understand how well both the technical and manual parts of the TAR process are performing. But beyond the actual mechanisms of evaluating a TAR system, there's also the issue of visibility, or the ability to monitor and understand and predict outcomes of the TAR process. So this goes beyond the TAR system and encompasses, like, the broader capacity to oversee the review's progress, the efficiency, the effectiveness.
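Here is a minimal sketch of that random-sampling certification step, using simulated data: draw a random sample, "review" it, and estimate what share of the relevant documents the TAR process captured. The collection, sample size, and labels are all hypothetical; a real protocol would pair the point estimate with a confidence interval like the one sketched earlier.

```python
import random

random.seed(1)

# Simulated collection: "human_relevant" is what an attorney reviewing the document would
# decide; "tar_relevant" is what the TAR process predicted or produced.
collection = []
for _ in range(20000):
    human_relevant = random.random() < 0.3
    tar_relevant = human_relevant and random.random() < 0.85  # TAR catches most, not all
    collection.append({"human_relevant": human_relevant, "tar_relevant": tar_relevant})

# Draw a random validation sample and "review" it (here we just read the simulated labels).
sample = random.sample(collection, 1500)
relevant_in_sample = [d for d in sample if d["human_relevant"]]
caught_by_tar = sum(d["tar_relevant"] for d in relevant_in_sample)

estimated_recall = caught_by_tar / len(relevant_in_sample)
print(f"Estimated recall from the sample: {estimated_recall:.0%} "
      f"(many protocols target roughly 80%)")
```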
Speaker 3:Our approach at Redgrave Data is usually through our advanced software capabilities, where we, like, prepare dashboarding and predictive analytics to provide users with an at-a-glance view of what's happening in your review right now and analyze trends in your data so we can aid in resource allocation. Are there certain sets of documents that you'll need to increase your contract attorneys for? If you increase the number of contract attorneys, can you finish the review faster? Will it diminish effectiveness? We also use them for forecasting costs, for better budgets and cost efficiency, and to optimize the review process, because sometimes you can, like, switch modes. So let's say at one point you're reviewing, like, full families. What if I switch to only reviewing parent documents? Or what if I switch to reviewing this set first, or this one? How long would it take me to finish the TAR process? So we can help you detour to a different lane if it will make the process more optimized, and we use our software and dashboarding for identifying potential risks in the e-discovery process that might cause issues downstream.
Speaker 2:Okay. So when I'm running my discovery process and I'm receiving from my vendor the periodic updates that tell me how the review is progressing, from a recall perspective, from those perspectives, that is considered a review system audit, just that process?
Speaker 3:Well, it depends on if you're auditing the technology, which would be through the random sampling and, you know, evaluating the classifier itself, or if you're evaluating the effectiveness of the process: like, is the process most optimal? Are the costs optimized adequately? Are your resources being allocated adequately? So there's a whole ecosystem around what the classifier itself is doing that also needs to be evaluated for the most effective TAR process as a whole, and not just the part where the documents are being coded, because that whole ecosystem around it affects it greatly, and usually attorneys don't have much visibility into it. But it's our passion at Redgrave Data to help people make more informed decisions about their review processes by showing them, through predictive analytics and custom software on top of your review platforms and your search platforms, what is happening in the data, what is happening in your review, if not real time, near time, so that you can make decisions on the ball about whether you need to switch processes or switch lanes or detour to another process to help you out in optimizing your review process itself.
Speaker 2:Gotcha.
Speaker 2:Okay. So the kind of data that I am used to seeing in a review process is just one set of relevant information that could be referred to when people use the phrase review system audits, but there is this whole other set of data that also can be encompassed. And so it sounds like it's another term of art where, when people talk to you about review system audits, you need to clarify and make sure you know what they're including in that, and that you're getting all of the information required to make informed decisions.
Speaker 3:Exactly, and it's shown to you in a way that you can actually make decisions about it. Let's say, for instance, in a regular review, the collection itself is growing all the time. Sometimes you're adding more custodians, sometimes date ranges change or search terms change, and things like that. How is that going to affect the collection? In some cases, you don't actually have the hard data as to how that's affecting the collection or how long the review will go. But with these systems that we place on top of, you know, let's say, a Relativity instance, where we're pulling numbers and reporting from your Relativity instance and placing them in a dashboard where you can see what's happening with the data, I think your decision-making becomes a lot more of a data-intensive process.
Speaker 2:Okay, so that makes a lot of sense, and it sounds like it's something you really want to incorporate into any review of significant size, material size.
Speaker 3:Absolutely, because, you know, like, one misstep can cost you so much as far as time and effort and attorney review time, and taking your senior attorneys away from strategy and back into, you know, doing samples. And I think our approach here at Redgrave Data is one that has great promise, you know, doing that last 20 percent that the review and e-discovery platforms don't do, and also presenting it to our customers and clients so that they have the specific information they need in each matter to make the most data-driven decision possible.
Speaker 2:Thank you so much. I do want to encourage the listeners: if you haven't read Beyond the Bar: Generative AI as a Transformative Component in Legal Document Review, I would encourage you to do that. I found it really interesting and informative, and even if you don't understand everything in there, getting that high-level understanding from the article on where the industry might go, and really seeing what kind of options are being explored, I think is a fantastic use of your time. So if you're looking to inform yourself a little more, I encourage you to read this article.
Speaker 3:And if you have any questions, feel free to reach out. I'm happy to answer any questions about the paper or any of the processes we talked about today.
Speaker 2:I do realize calling it an article instead of a paper is definitely a misnomer. It is definitely a paper. It is beyond article length for the kinds of articles you're used to getting, but I really did enjoy it, so it's fantastic. Thank you so much for sharing that.
Speaker 3:No problem, thank you.
Speaker 2:We are also going to be making available a TAR reference that Lenora was kind enough to pull together for us, so that will be available on the Emerging Litigation Podcast website. And I want to thank Tom Hagy and the Emerging Litigation Podcast for letting me guest host once again.
Speaker 1:That concludes this episode of the Emerging Litigation Podcast, a co-production of HB Litigation, Critical Legal Content, vLex Fastcase and our friends at Law Street Media. I'm Tom Hagy, your host, which would explain why I'm talking. Please feel free to reach out to me if you have ideas for a future episode, and don't hesitate to share this with clients, colleagues, friends, animals you may have left at home, teenagers you've irresponsibly left unsupervised, and certain classifications of fruits and vegetables. And if you feel so moved, please give us a rating. Those always help. Thank you for listening.