Emerging Litigation Podcast
Litigators and other professionals share their thoughts on ELP about new legal theories, new areas of litigation, and how existing (sometimes old) laws are being asked to respond to emerging risks. The podcast is designed for plaintiff attorneys, defense counsel, corporations, risk professionals, litigation support companies, law students, or anyone interested in the law. The host is Tom Hagy, long-time legal news writer and enthusiast. He is former editor and publisher of Mealey's Litigation Reports, Founder and Editor-in-Chief of HB Litigation, co-owner of Critical Legal Content, and Editor-in-Chief of multiple legal blogs for clients. Contact him at Editor@LitigationConferences.com.
DOJ’s AI Litigation Task Force and What It Signals for Corporate AI Governance with Adria Perez
Regulators are no longer asking about AI principles — they want proof. Legal teams must show how their controls work, withstand scrutiny, and protect privilege.
In this episode, I speak with Adria Perez about the evolving landscape of AI policy and what it means for corporate compliance. Our conversation focuses on the U.S. Department of Justice’s AI Litigation Task Force and the growing expectation that organizations demonstrate real oversight, documented controls, and responsible use of AI. Adria shares practical insight into the challenges and opportunities AI presents for legal departments, as well as how governance frameworks can help companies adopt these tools with confidence.
Adria Perez is a partner in Reed Smith’s Global Regulatory Enforcement Group and a former member of the Volkswagen AG Independent Compliance Monitor and Auditor team. With deep experience at the intersection of enforcement, compliance, and monitorship expectations, she brings a perspective that in-house counsel and compliance leaders increasingly need as AI oversight rises to the board level.
During our discussion, we cover:
- Artificial intelligence as both a strategic asset and a compliance challenge
- The DOJ’s AI Litigation Task Force and what it signals for corporate oversight
- Best practices for responsible AI use, governance, and internal controls
- AI’s growing role in whistleblower complaints and internal investigations
- Workflow efficiencies and operational advantages AI can deliver
- Protocols and employee training essential for safe, effective adoption
- Communicating AI initiatives, safeguards, and successes to corporate boards
If you’re responsible for compliance, investigations, or AI governance, this episode offers a clear look at how legal teams can adapt and lead in a rapidly changing environment.
Special thanks to Adria Perez for sharing her insights and making the time to join us.
Tom Hagy
Host | The Emerging Litigation Podcast
______________________________________
Thanks for listening!
If you like what you hear please give us a rating. You'd be amazed at how much that helps.
If you have questions for Tom or would like to participate, you can reach him at Editor@LitigationConferences.com.
Ask him about creating this kind of content for your firm -- podcasts, webinars, blogs, articles, papers, and more.
Setting The Federal AI Stage
Tom Hagy: Adria Perez, welcome to the Emerging Litigation Podcast.
Adria Perez: Thank you, Tom. I've been looking forward to it.
Tom Hagy: Okay. Yeah. And I stepped all over you. That's fine. Uh we can keep moving. So I'll just give a brief introduction here about what we're going to talk about. So federal AI policy is moving quickly from broad principles to real legal action. In 2026, the DOJ announced an artificial intelligence litigation task force. This is a group created to challenge state AI laws that the administration views as inconsistent with a national minimally burdensome framework. How am I doing so far?
Adria Perez: You're doing great, Tom.
Tom Hagy: Okay, thank you. I can read. At the same time, the DOJ prosecutors have also signaled what they expect companies to be able to show about their AI controls, risk assessments, governance, and evidence that AI is being used responsibly and lawfully. We want that. So what does all this mean for in-house counsel who are trying to build uh AI governance that holds up under scrutiny from the federal government, whether the issue is compliance or internal investigations or AI misuse like deepfakes? And um just quickly, you are a partner in Reed Smith's Global Regulatory Enforcement Group, and you're a former member of the Volkswagen AG Independent Compliance Monitor and Auditor team, right?
DOJ Expectations For Corporate AI
Adria Perez: That's exactly right. Thank you, Tom. And I think that's a great summary of all that's been happening, and just kind of taking a step back. So obviously, AI is a tool, it can be used for you or against you, and the government understands that. So with the litigation task force that was announced with the recent executive order, it's really the Trump administration saying, okay, we don't have a federal AI law right now, but we have very progressive and proactive states out there who have come up with their own laws. So let's try to pause all of that for a moment and try to figure out whether there can be a comprehensive effort or law, maybe at the federal level, that could be useful. And we've seen that actually with the EU AI Act, where there is a comprehensive law that would apply to all of the EU member states. So that's sort of, I think, where this is headed. But of course, you know, we've seen in the last year or so that executive orders come out, and then there's some time of just trying to figure out what are the best next steps, and that's what we're waiting for right now. But I will say the federal government has been focused on AI for a few years now, even under the Biden administration. So under that administration, there was an update to the DOJ's evaluation of corporate compliance programs. And what that does is it's saying, when the DOJ reviews a company and its compliance program when deciding whether to prosecute or deciding about a resolution, the DOJ looks at how the company uses AI. So, how are you using it in your compliance program? Do you have the right controls to mitigate bias, to make sure that the AI tool is giving you something that's truthful and verifiable? And there are questions in the evaluation that can kind of guide companies when they are using AI for compliance purposes, to say, okay, if I ever get in front of the DOJ, what do I need to show them?
Whether it's documentation about protocols, or whether it's use examples to show them that there was an effort to mitigate all of the errors or bias that can come out of an AI tool. So all of this is to say that the DOJ is well acquainted with AI. They use AI themselves, right? There was a Biden executive order that came out a few years ago saying to all the federal agencies and departments, you've got to come up with your own list of how you use AI and which tools you use, and you need to make it public. And the DOJ still does that. They have it on their website, they have an inventory. It's a long, long spreadsheet of all the tools they use and how they use them. And that's interesting in a way, because I do think there is sort of this um perception that law enforcement uses AI only for what I would call hardcore crimes, right? Which would be, you know, the types of things you would see at a state level. Um, but that's not true. Uh, there are plenty of AI tools that the DOJ has disclosed that are used for white-collar crimes, or for, you know, instances where companies commit crimes, whether that's, you know, looking at financial records and summarizing those, or looking at travel. I've had clients who have called me and said, look, the Department of Homeland Security just showed up at our office and wants to know why our employees have been to China 12 times in the last 18 months. You know, that's through AI. They're able to summarize across sources in a better, more enhanced way than they ever had before. And they're going to continue to do that in order to see if they can predict or detect crimes.
Federal Vs State: Collisions Ahead
Tom Hagy: Yeah. Yeah. You can see room for just tremendous advantages in terms of identification and research and digging in. You can also see room for just crazy levels of abuse. Um, so that's, I guess, the case with a lot of things, uh, a lot of inventions and transformations. Certainly the internet was that way. It was so awesome, and yet, oh my God, so crazy, you know. Uh online banking, amazing, but just a new way to steal. But um, so that's good. So, you know, you mentioned the uh the DOJ memo, the executive order, and then there's an evaluation of corporate compliance programs. And I'll put a link to those in the show notes so people can read them uh for themselves. Um so the task force is aimed at state laws, and the executive order emphasizes a single national framework, so the legal risk map for companies may shift as federal and state approaches collide and as policies might shift, because I think you know that happens. I'm sorry. I didn't mean to laugh at myself. Um so what should in-house uh legal teams expect this year, uh both from the DOJ federal posture and from some of the state activity that we see evolving?
Adria Perez: I think you're gonna see something that's akin to what we saw with ESG, and I know ESG is not a popular acronym these days, um, given the current administration, and that's another policy change that you and I can laugh about, right? But there was a time um before this administration where it was really difficult to understand where the enforcement efforts were gonna come from. Is it states? Is this federal? And then when the Trump administration came in, it was the states, you know, particularly the AGs in New York and California saying we're not giving up on ESG, and we still have laws that you must abide by, particularly in California. Um, and so there was this unpredictability of, okay, well, the federal government may not be so focused on certain things, but the state governments and law enforcement authorities are. And I think hopefully our clients are sort of used to that kind of unpredictability, because you've got to figure out where your highest risks are and really put your resources into that. So when it comes to AI usage at a company, you've got to see, okay, where are our risks? I'm guessing, because California is such a large economy, that most companies are doing business in California. So then you need to look at, okay, what are our risks there? And then how can we mitigate those if the federal government is not going to be as active in enforcement? But I think what you're seeing from the Trump administration is, look, there's unpredictability here, and we want some control over it. And having control over it from a federal perspective actually could increase other risks that companies need to be aware of. But I will say this. Um, despite government enforcement, and I do have cases right now where we have employees at companies who are using AI to commit crimes. I mean, it's definitely happening.
Um, but besides enforcement, you really have to think too about whistleblowers. And you've alluded to this already, Tom, about deepfakes and misuse. But, you know, what I try to share with my clients is, we've got to change our mindsets a little bit. You know, we used to be able to see a photo and we could tell right away if it was photoshopped, right? Remember when Photoshop was the new rage? Yeah. Right. Now it is so hard to see an image and know whether it's AI generated or not, because the images are so good. I mean, they're so enhanced these days. And even yesterday, I was um scrolling through some videos with my daughter, and we were laughing about this one video, and she was like, Oh, that's AI generated. And I was like, Are you sure? It looks so real. And I literally had to scroll to the bottom of the comments to see that it was AI generated. And that's the problem that we have. And so when you have really active whistleblowers, which our government is providing a lot of incentives for there to be, you know, they get a bounty in many respects now, in more topics now than ever before uh through the federal government. So there's incentives, and now there's this great tool that can provide evidence, or create evidence that looks real. And so a lot of what we're doing right now is trying to educate our corporate clients on, you know, what are ways that whistleblowers can create evidence? And it can even be mundane things that get you thinking, that get you to issue-spot. Like, for example, I saw this great post, um, and it was actually referenced in a New York Post article, where DoorDash customers were taking a picture of the food they received and then using AI to make it look as if the food was not cooked through or maybe was rotten.
And then they would ask for a refund through the DoorDash app, using that AI-generated photo to say, look, my food was not right, you know, it wasn't cooked through or it was rotten, so that they could get a refund. And that's stealing, right? That's straight-up theft. And so it's interesting how these mundane examples that are happening every day get you to think about the bigger examples that can happen in a corporate context.
Whistleblowers, Deepfakes, And Fake Evidence
Tom Hagy: Yeah. I mean, it's amazing. What you're looking at right now, you're looking at me. I'm actually AI, because, I mean, who would believe it could look that good? Um, clearly it looks like when he went to shave this morning, he forgot to put the uh little clip on the razor, and I just gouged my uh chin. Oh yeah, they said a nice full chin. Anyway, that's a whole separate thing. I think I'm gonna AI it back in. But you mentioned, you know, the younger generations. You know, I think um I'm pretty in tune to AI. I use it a lot. Um and I actually uh did a podcast with JD Supra about uh using AI in drafting, which I think I might have sent you. Because attorneys don't always love to write, but they are good at talking. Uh you can turn a lot of your talking into drafts. But I, you know, I talk about be careful, check everything, check everything. It's never gonna write a final product for you. But anyway, so I sent my daughter, who loves to snowboard, a picture from Russia where the snow had fallen outside of Moscow or wherever it was, so high it was at the top of like 10, 15-story apartment buildings. And I'm like, wow! And I believed it, you know, the climate's going bananas, why not? It's cold up there. I don't know where they were, Siberia, maybe. And I thought, you know, Russian kids were uh snowboarding off the tops of these buildings. And I thought, this made me actually, you know, really like the Russian people. Um, but it looked great. And I sent it to her, she said, Dad, it's AI. Like, okay, caught me, you know. It happens a lot, and it's happening a lot uh with people. That was my first concern about how cool those videos can be. I love to see like when a wolf and a deer are hanging out together, you know. Um, these are all very sweet, but yeah, I was always concerned when you'd see like a head of state giving a speech about something.
Like, how do you know? Um, and then people in even less sophisticated uh countries see that. And I feel like we're becoming less sophisticated ourselves, but when people in other countries would see it, they'd be like, oh yeah, our president said let's uh let's do that. So anyway, I feel like that stuff should be marked, uh like a cigarette label almost, right? And if it doesn't have that, that's right, that's right.
Adria Perez: And I think there are some requirements on certain platforms that you have to mark it. But like I told you, I had to scroll down pretty far through all these hashtags to actually find it in the video that my daughter and I were viewing. But the reason why I'm laughing is because, again, these images are so good and so enhanced that it makes you feel a little not so sophisticated and not so smart, especially with me, when my 12-year-old is telling me, don't you see it's a lie? But again, it really grabs you. And some of these videos and images, it really produces emotion, right? So you're sitting there like shocked, or you're angry, or what have you. And that, I mean, I think, breeds some of the concern for our clients: they're thinking, you know, I could get something and I'm not even sure if it's real. Like, what should I do next? And I'll give you an example. Like a few weeks ago, I had a client who received a tip that one of their um blue-collar workers may have been um in conversations online with uh minors, and it was not an appropriate conversation. It was an illicit conversation. And so we received images of the conversation, like the chat. And I wasn't sure at first whether it was actually real or not. Um, because, again, AI quality is high. And so we had to do some things to make sure that it was actually legitimate before we sat down and talked to that person. And so we went through, um, with the help of their IT department, but we also will hire forensic um consultants to help us with this, looking at metadata and trying to understand, is there some sort of indicator in the metadata that would show that it was actually um generated on a platform that is, you know, an AI platform or what have you. Because we don't want to assume every single time someone sends us an image or a video that it's accurate.
We need to do our own due diligence as part of the investigative process. And most of our clients, of course, have an investigative protocol that they use. But just like with a paper document, you need to make sure and double-check that it's actually accurate before you start assuming things and bringing up allegations. So it's the same thing, it's just in a different format, and honestly it takes more time and an expertise that many companies may not have, which is why we use consultants sometimes.
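Editor's note: as a rough illustration of the kind of first-pass triage described above, here is a minimal Python sketch that scans a file's raw bytes for text markers some AI generators leave behind. The marker strings here are hypothetical examples, not a real detection list; genuine forensic review uses specialist tools and experts, and a clean scan proves nothing about authenticity.

```python
# HYPOTHETICAL marker strings for illustration only; real generators vary,
# markers are easily stripped, and absence of a marker proves nothing.
AI_MARKERS = [b"Stable Diffusion", b"Midjourney", b"DALL-E", b"c2pa"]

def triage_file_bytes(raw: bytes) -> list[str]:
    """Return any known AI-generator marker strings found in the raw bytes."""
    return [m.decode() for m in AI_MARKERS if m in raw]

def needs_forensic_review(raw: bytes) -> bool:
    """A hit means escalate to forensic review; a clean scan still does not
    prove the file is authentic."""
    return bool(triage_file_bytes(raw))
```

The point of the sketch is the workflow, not the detection: a marker hit only routes the file onward for expert review, and a miss never short-circuits the investigation.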
Tom Hagy: Yeah, I feel like somebody must be working on some kind of an app that you can run something through, uh using AI, probably, to say, you know, is this real or is it not? Right now it's like, what do they call it? Crowdsourcing. You know, right now we're relying on each other to say, no, we didn't see that. So moving on to really kind of practical things: what specific controls should in-house teams have so they can demonstrate that they're using AI responsibly, that they're overseeing the use of AI responsibly across their organization?
Adria Perez: If I had the ideal situation with a client, and we were in front of the DOJ talking about the client's compliance program, and the DOJ started asking questions about AI usage, I mean, I would love to have a protocol for how AI is used in the compliance program. And what that means is, we even have protocols here at Reed Smith, for example, of how we use our AI platform for litigation. And it will outline, you know, when is Gen AI useful in a litigation, at what stages? When should we not use um generative AI for a litigation? For example, you know, when we're doing a privilege review, we would prefer to have human eyes on those documents to make sure they're privileged. Right now, we don't believe using Gen AI is the best use of that tool to detect privilege, and it is nuanced in some ways. And so we want a human to be able to do that, and that's in the protocol. It also talks about, you know, ethical considerations, especially for lawyers. We have rules we have to follow. And then it talks about other tools. You know, here at the firm we use Harvey for AI purposes, but we also have other tools that we can use in e-discovery when we're reviewing documents or reviewing videos, and it kind of just outlines the different tools and what they're good for, so that you're not using the wrong tool for the wrong task, basically. Right. And that's what I like to see. Now, we also cover how do we mitigate bias, right, and errors, in our protocol. Whether that's through, excuse me, like a prompt, or maybe we've seen evidence of, you know, when you try to do this task with generative AI, you're gonna get a lot of, you know, false positives. So don't use it, right? That's what I would love to see and be able to show the DOJ: look, we've taken a lot of thought, we've taken a lot of time, we put this together, this is what we follow.
We may even audit sometimes how we use these tools, to make sure we're not, you know, generating a lot of biased false positives. I mean, we've seen that with clients, you know, because there are cautionary tales, right? Of clients using AI to review um resumes or candidates. Because what we've seen, years ago, is a very top big-brand company here in the United States decided, okay, we know who our superstars are in the company. Let's take the resumes from those superstars and let's put them into an algorithm, so that when we see candidates that fit those, you know, criteria or those descriptors, we're gonna get more superstars. So at a top high-level view, that makes total sense, right? That's ingenious. But what was happening was, with the candidates that actually came out of that algorithm and got to the next step for a screening, it was not a very diverse sample. All these false positives started to happen, and basically they were looking at candidates that were all basically the same, and it wasn't giving them what they wanted. And so it led to disputes and litigation. And so testing your algorithms and your AI usage is a good idea, to see, okay, are we creating our own false positives here? Are we creating a situation where bias is going to be much higher than what we really want?
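Editor's note: one widely used audit for the kind of screening bias described above is the "four-fifths" adverse-impact check, which compares selection rates across groups. A minimal sketch, assuming you can tally selected and total candidates per group; this is a screening heuristic, not a legal determination.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total candidates)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def passes_four_fifths(outcomes: dict[str, tuple[int, int]]) -> bool:
    """True if every group's selection rate is at least 80% of the highest
    group's rate; a failing check flags the algorithm for closer review."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return all(rate >= 0.8 * top for rate in rates.values())
```

Running a check like this periodically against a screening algorithm's output is one concrete way to document the "testing" step described in the conversation.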
Tom Hagy: Okay. Yeah, I've got so many questions I'd like to ask, but I can't. Um, I'll just say, and you don't have to respond to this, but, for example, this isn't uh political, but, you know, with the whole Epstein investigation, when I heard that there are like three gazillion files and photos and videos, and it's gonna take us forever, I'm like, are they not using AI? You know what I mean? It seems like that's an ideal use. So moving away from that particular instance: if you've got a data dump, like you said, you get so much data, you get email, you get photos and video. You guys have a system, and you mentioned Harvey. Is that something that you all use? What does that do?
Adria Perez: Yeah, so we have our AI platform here at the firm, it's called Harvey. And Harvey is Gen AI, and so you can generate, you know, all sorts of different things, whether it's summaries, you know, interview questions, summaries of depositions, if you want to do summaries of financial statements, which I did the other day, a chronology, you can do all sorts of things through Harvey. But then we have other tools when we get client data, um, whether it's, you know, pictures or video or documents, to help us review those. So, you know, you mentioned the Epstein files. You know, one thing that I need for one of my cases is facial recognition for videos. We have over 2,000 videos that we need to review. Um, it would take hours and hours and hours for me to have a team of reviewers do that. I'm concerned also that it won't be as efficient. And so we have a test case where we're trying to see, can we use an AI tool that will do facial recognition on these videos, so that I know exactly when a certain person appears in these videos.
Tom Hagy: Yeah.
Adria Perez: Yeah. And so that's definitely something we are using and looking into to be more efficient. I know there's this idea in the legal industry, you know, where you mostly bill by the hour, that using these tools is taking us away from profit. But I don't feel that way at all. I think that's a complete misunderstanding. Because I don't think you can actually be really effective or efficient looking at over 2,000 videos, with hundreds and hundreds of hours of time, when it would be easier and better for our clients and for us to be able to detect when a certain person appears. And then I can have a human look at it and say, is this really helpful for our case or not?
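Editor's note: the video-review workflow described here (scan frames, flag timestamps where a target face appears, then have a human verify the hits) can be sketched as below. No specific tool is named in the episode, so the detector is left as a pluggable callback; in practice it would wrap a face-recognition library.

```python
from typing import Callable

def find_appearances(
    frame_count: int,
    fps: float,
    sample_every: int,
    is_target_in_frame: Callable[[int], bool],  # detector: frame index -> match?
) -> list[float]:
    """Sample every Nth frame and return timestamps (in seconds) of sampled
    frames where the detector reports the target face."""
    return [
        idx / fps
        for idx in range(0, frame_count, sample_every)
        if is_target_in_frame(idx)
    ]
```

The returned timestamps are exactly the "when a certain person appears" list a human reviewer would then check, rather than watching all 2,000 videos end to end.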
Tom Hagy: Yeah. When is that hourly thing going to go away?
Adria Perez: You know, I've been doing this for over 20 years, Tom, and that's been a question for as long as I can remember. And um I think it's gonna be a question for a while.
Tom Hagy: I'm just gonna go out on a limb here and say, I think it's just so stupid. My limited experience with hourly rates, when uh I'll work for clients and stuff, and they want to, I won't even do an hourly rate. I just can't. I'm like, you know, a lawyer could say something in 10 minutes that can change the uh course of a company. So that's 10 minutes, and, you know, they made a gazillion dollars following your advice or whatever, or they stayed out of trouble. It's just like, I don't get it, but anyway.
Adria Perez: You're not the only one. That's for sure.
Tom Hagy: Plus, I don't know how you do it. What, do you sit there? I know you do it, I know how you do it. But generally I feel like I'm watching like a chess match, you know, where you're always like, okay, I'm hitting this client's clock and this client's clock. And I know you guys have software and stuff for that, but that's just my pet peeve on behalf of attorneys.
Adria Perez: It's funny that we're talking about AI, Tom, and then, you know, you're bringing out the billable hour conundrum, but both of them brainwash you. Like when you start billing hours, you think of time differently and you think of your daily tasks differently, right? Uh-huh. But using AI, same thing. Like once you start using AI and you start generating your own prompts and putting in prompts and getting better at it, you start to think of everything as prompts. So it's the same kind of brainwash. It changes your mind. Yeah, your mindset for sure.
Tom Hagy: Yeah. I have found myself, yeah, it's funny how it does that. Even silly things unrelated, though. I'll be reading a hard-copy magazine and I'll want to flip the photo to the next photo. I'm like, oh God, my brain is rewired. But yeah, you do start to think that way. There was an interesting thread that somebody brought up on uh on LinkedIn about gender differences in AI and the use of AI. And the question was, uh, this is all generalizing, of course, whether women traditionally have more soft skills that actually make them better at writing prompts and following up and doing more detail work. And then I did a little bit of poking around, because they were saying, you know, women can actually jump ahead. But all I can think about is men and learning anything new. Like men generally don't use instructions. I don't use instructions. Men will dive in, damn the embarrassment, you know what I mean? Where, stereotypically, a woman might be more careful and so might be better suited for AI prompts. Who knows? Um, it's always fun to talk about.
GenAI In Litigation And Investigations
Adria Perez: Interesting. Yeah, and I'm sorry to interrupt you, but that's really interesting. I have not seen that, but I can tell you just from my own um experience, when I first started doing prompts, I kept starting off with the word please, and then wondered why I was doing that. Yeah. And I don't know if that changes anything. I haven't tried it to see if it changes anything, but I would start off with please, and then I felt like I had to give a lot of background to help the platform sort of understand where I was headed. But it's so interesting, like how prompts can really change over time and how you can improve yourself on prompts. But also, I've wondered another thing. So I received a summary of a deposition that was generated through our platform, Harvey. And um, it was interesting, because I was really asking for what are the inconsistencies. So this plaintiff has, you know, filed a complaint against my client, alleged a ton of things, right? And then she was deposed a few years um later. And I was like, okay, well, let's try to see if we can get a summary of the inconsistencies. That would normally take an associate several hours to do, to read the deposition transcript, to compare it to the complaint, to figure out the inconsistencies. And then within 20 minutes, I got something from Harvey. And I don't know if it was because the plaintiff is a woman, but Harvey put in a footnote to kind of describe to me how, even though it named some inconsistencies, they could be based on her gender. And I thought that was interesting, because I did not ask that. But again, these are things where, as you use AI more and more, you kind of understand what to look for, and what could be helpful and what may not be as helpful.
But I literally looked at the deposition uh summary on a plane and out loud said, oh wow, because I did not expect it to have that footnote. But it's interesting. And it is something I'm sure someone is studying at a very high collegiate level: what does it mean as far as gender differences with regard to prompts, but also maybe the platform signaling or actually pointing to gender differences. So it's very interesting.
Tom Hagy: I'll send you a link to that post. Uh, I forget the woman's name off the top of my head, but she does a lot of posting on LinkedIn and asks good questions. I thought that was a fun one. Um, so okay, getting back to all of this: what investigation workflows and guardrails uh have proven effective, especially around validation, privilege, and documentation that stands up to scrutiny?
Adria Perez: Yeah, I know we talked about the protocol. I'll just give another example of a workflow that I think works really well, and it also is a control, which is: once you start training business personnel on potential deepfakes, and you give them examples like the DoorDash example, or we like to show a video that one of our partners created to show that, you know, anybody can create a video through AI, and through public sources of AI, actually. Um, you then are gonna start to get um a lot of business personnel um reaching out to compliance and saying, is this real? Is this real? Is this real? Right. Um, and that's great, because that's what issue spotting is about: when you're looking at things differently, you're gonna keep, you know, referring back to compliance and saying, hey, can you see if this is real? Um, and so a lot of my clients, what they've done is they have put in an escalation procedure. So when that comes in from a business person saying, look, I received this document from a vendor overseas and it looks a little off to me, can someone please look at it and see if it's real or if it's generated by AI? Then it will go through a procedure where it will go through compliance to an IT person who will have the expertise to help them sort of figure out, okay, do we think it's real or not? If it is real, then it will go through the compliance piece and up to legal. If they're not sure, then I will get contacted and we'll look into a forensic consultant to see, you know, what the issue could be, or if it is real or not. But that escalation procedure is really important, right? And it's just an extra check, but through established processes, because I'm sure companies have, you know, done this before in other areas, where they had these escalation procedures to say, okay, we're gonna have to bring in experts throughout the company to help us figure out what is the issue and what is the next step.
And that's really important. I would say too, Tom, just to throw this out there, because I don't think our clients do it enough: if you're a public company and you have a board of directors, they're all very interested in AI and AI usage at the company. Some of our clients have board committees focused on technology, or emerging technology and its risks. But I would say to compliance and legal departments: you should be bragging to your board about your use of AI and the protocols you have. They want to hear about it. It's interesting to me that when I'm reporting to a board or a board committee about an issue and I bring up, isn't it great that our legal and compliance departments use AI for X, Y, and Z, sometimes the board members are just surprised. And I'm surprised they're surprised, because that's something that really should be promoted. People often think of legal and compliance as a way to expend resources and spend money without bringing value to the company. That is not the case, and this is one of those areas where legal and compliance programs can say, we actually are bringing value, and this is how we're bringing it, through AI. So I just want to encourage people to do that.
Prompt Craft, Bias, And Gender Nuance
Tom Hagy: Yeah, I think it's here. Embrace it, and learn how to use it ethically and responsibly. Obviously you can use it badly, but you can use a chainsaw badly too; I don't know why I came up with that metaphor. Like I said, it's here. I've got a nephew who teaches writing at the University of Wisconsin, and this is a big deal for teachers. He's got an issue where he can kind of spot it. Also, I was working with an SEO company that was advising a client of ours on AI, and they said, here's a list of words that AI will always use that you want to avoid. For example, if I ask it for a summary of a court decision or a lawsuit, it always says, in a landmark lawsuit against so-and-so. As I've told it: never use landmark. You don't know if it's a landmark; you'd have to know more than I know, and more than you know, to call it that. And somewhere along the line they must have disliked the word landmark and liked the word landscape, because "in the AI landscape" comes up the same way. So I've actually told my AI: stop using landscape and never say landmark. In fact, don't use adjectives, because that's subjective stuff. But anyway, my nephew, I think what he said was they're going to do more in-class writing. Get the old blue book and pen out. Even if it's just short, you want to see that somebody can put their thoughts down coherently on paper, but otherwise they're teaching students to use it, and to identify it.
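The "never say landmark" rule can also be enforced after the fact with a trivial post-processing check on AI-generated text. This is just a sketch; the banned list below is an invented example, not any real style guide:

```python
# Minimal post-processing check: scan AI-generated text for words you've
# banned and flag any that slipped through the prompt instructions.
import re

BANNED = {"landmark", "landscape", "delve"}  # illustrative list only

def flag_banned_words(text: str) -> set[str]:
    """Return the banned words that appear (case-insensitively) in text."""
    words = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
    return words & BANNED

summary = "In a landmark lawsuit, the court reshaped the legal landscape."
print(sorted(flag_banned_words(summary)))  # -> ['landmark', 'landscape']
```

A check like this catches the cases where the model ignores the standing instruction, which, as the conversation notes, it sometimes does.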
But you talked about using forensic folks, and while you were saying it, I was thinking, why don't you just save some money and send it to your daughter to see if it's real or not, because kids of a certain age can tell. There's a great book called The Mindset List, from a professor at Beloit College, also in Wisconsin; I don't know why I keep giving them a plug. It's along the lines of: if you were born in 1968, you'll never know a world without so-and-so. My kids, for example, were born in '91 and '93, so they'll never know a world without computers, without a mouse. When they were two years old they were navigating a mouse: okay, this thing over here is going to make that thing over there move. It's a cool list. Kids now won't know a world without AI, so they'll always be on the lookout for fakes.
Adria Perez: That's exactly right. And can I just double down on what you said about adjectives and the use of landmark and landscape? In the last four or five months we've seen more whistleblower complaints that run 20 pages. They have a lot of adjectives, like you're mentioning. They cite every potential statute known to man, most of which do not apply. And it's really easy to see right away that it's been generated by AI. But the issue doesn't stop there, because the compliance department has seen it and sent it to us as outside counsel, so obviously they want us to look at it. What ends up happening with these long whistleblower letters full of adjectives is that it almost makes it harder for the compliance department to understand, well, what do I do next? What's really the issue buried in these 20 pages? And you can't just say to your board, or to the DOJ, for example, that it was so long we decided not to do anything about it. So we have to figure out the core issue and what we should look into, because of what you were just referring to: all the adjectives, all these words and phrases that don't really make sense in the context we're in. I've even seen pleadings where I've thought, this is definitely generated by AI, because half of what they're saying makes no sense in the legal context. So I commend your nephew: let's get back to brass tacks and start doing writing in class. The same goes for law school. You've got to have the experience, and the education, to look at something and know, that doesn't make sense in the legal context, so this must be generated by AI. You have to have those experiences.
Tom Hagy: Yes, you're absolutely right. I call it a smell test, because I'll look at something and think, this all looks good and sounds good, but wait a second, is that really right? Then I'll ask the AI, and it comes back with, sorry, good catch. I'm like, okay, thank you for correcting me, because that's not actually right. So I treat it like a young or inexperienced researcher who's just really, really smart. But finally, let's get down to some very practical ideas. What should companies be doing right now to get in line with these requirements?
Escalation Workflows And Board Oversight
Adria Perez: From a compliance program enhancement point of view, I think government authorities would expect companies to use AI to enhance their policies. Even two years ago, a company would send me eight policies and say, can you please help us streamline these and get rid of inconsistencies? Which always happens; it's like when you get instructions to put together furniture, as you were saying, that say see page five, and you go to page five and there is no page five. AI is a hugely helpful tool for finding inconsistencies and streamlining policies, and for enhancing your training to make it more user-friendly. I've seen our clients using AI to summarize whistleblower complaints or surveys. A lot of our clients survey employees every year and want to see how the responses change over time: are they improving their processes in a way that shows up in the employees' answers? They'll use AI to summarize that. One client told me she includes a very specific survey question about feeling siloed or feeling too much pressure or stress, because she believes that if you feel you're in a silo and under tremendous stress to hit a target the company has set for the year, you're more likely to break or circumvent the rules to do it. So she's using AI to find those hot spots and say, okay, I need to do more training with that sales team, or in that geography, or whatever it is, to prevent something that could end up costing us millions and millions of dollars. There are all sorts of ways to use AI to really help a compliance program.
On the internal investigation side, we use it for chronologies, interview questions, and summarizing whistleblower complaints so we can see where the trends are, plus drafting and editing. Just the other day I was given something in Portuguese, and even though I speak and read Spanish, Portuguese is a little different. I was reading this audit report in Portuguese and had to get an answer to the client within five minutes because of an issue that was about to happen. I read it and thought, okay, these are the paragraphs I really need to understand fully, put them into Harvey, and got a quick high-level translation, which was great. But document review, Tom, is where it has helped me so much, in so many cases. I'll give you an example. I had a client with about a hundred custodians, a hundred employees with Teams chats and emails about an issue that really permeated the company, and I had to figure out who first learned about the fraud, because it was perpetuated year after year after year. We had all the Teams chats and emails put into a review database with an AI tool attached, and I could literally ask, who was the first one to learn about the fraud? I was more descriptive and detailed than that, but automatically a summary comes up saying, in 2017, Tom realized there was an issue about such-and-such, and it links all the documents you can click on to see where it got the information to create the summary. That was huge, because it saved all that time.
Then I said to the reviewers, okay, I need you to focus on these documents first, and then find other documents like them, so we can be more targeted instead of reviewing doc after doc, Teams chat after Teams chat. And Tom, I know this will surprise you, but a lot of Teams chats are embedded in a bunch of Starbucks talk: did you get your Starbucks stars, did you get your coffee, and so on, with maybe two lines of something actually important. There's just no way we would have gotten to that important piece as quickly as we did without AI as the tool. So there are just so many ways to enhance things and make things go faster.
Tom Hagy: I just think of the stupid things I say to people, and they say back to me, in Teams chats, because we're human beings who aren't in the same room, and we just have funny observations or whatever, and it's so dumb. But it's fun, and it takes a couple of minutes. If somebody read it back, though: what were you doing? Is this how you spend your time? Like, no. Well, that day I did, a little bit.
Adria Perez: That happens more often than not. You just nailed it, Tom. When we're showing an employee, in an interview, Teams chat after Teams chat, or WhatsApp message after WhatsApp message, they're just so embarrassed. I remember one person saying, please don't look at all that Starbucks talk, or don't look at all this talk about political views. We're not there for that, but it's amazing to see how people react. It used to be emails; I used to show emails more often than not, but we're past that now. As we talk about evolving technology, it's all in WhatsApp and Teams chat these days.
Tom Hagy: Yeah. And sometimes with Teams, I'll have meetings or phone calls with people that are about business, but I'm also friends with them, so we'll go off on something, and then the summary of the call will read: Tom shared his bizarre experience on spring break when he was 20 years old and ended up in jail, and Sarah responded that she hopes he learned a lot. It treats our goofy talk so formally. I'm like, okay, that didn't need to be in there. One last thing: you've mentioned Harvey. I meant to ask, is that off the shelf, or is it proprietary to Reed Smith?
Adria Perez: I think Harvey is used by other companies and other firms, but this one in particular is tailored to us. Some of my clients use Harvey, while others use different tools that have come along that are more tailored to in-house counsel, in-house legal departments, and compliance departments. I will mention this, though, Tom, because I thought it was interesting. One of my clients decided to have two different AI platforms at the company: one for the business at large, and one for the legal and compliance departments. The reason is that the legal department realized the whistleblower complaints and other drafted documents they were receiving showed that the platform was using their legal memos and their privileged and confidential materials. They could see it was learning from that, and the legal and compliance departments decided, we can't have that. So they separated the platforms, which is more of a cost, with more resources dedicated to it. That was the first time I had ever heard that from a client, and I would not be surprised if it keeps happening. But here at Reed Smith, we use Harvey and we're trained on Harvey. Within the litigation department, we've just started routine webinars where we talk with each other about how we're using AI, what's working and what's not. The partners were just trained the other day on certain features in Harvey; we get new features pretty regularly and are trained on them. It is very much a focus here. I was at a different place for 20 years, and I've never seen anything like the kind of resources Reed Smith puts into technology.
And it's great, because sometimes I feel like I'm actually teaching our clients. We have a CLE presentation where we talk about AI in compliance and internal investigations, and in the appendix I include six or seven prompts that clients can literally cut and paste, prompts that I use. Every single one says please in it, by the way. I think it's really important that we share with our clients how we use it and how they can enhance what they're doing, because it is such a great tool. And like you said, if you don't use it, the technology is just going to run away from you. You'll be in a poorer state than those who do.
Tom Hagy: My use of it is very simple; I'm writing about areas of law. For anything I do for a client, I include a disclaimer: initial drafts and research were done with the aid of Microsoft Copilot and Adobe's AI assistant, but then I say it's been reviewed by a legal editor with 40 years of experience. Nobody's actually made a big deal out of it, but if you saw the first draft versus the final, you would see it's checked. And I do like what you mentioned in the one case, where the system shows you where it got the information. That's one thing I have to ask for with Copilot; Adobe does it automatically. Adobe's different because I'll give it a 150-page document and ask for the key points, and it shows me exactly where in the document it got them, so I can go through, just like you were saying, and check them. So that's it. I think I covered everything. Is there anything else you wanted to go over? I think we nailed it.
Adria Perez: This has been so fun. The only final thought I'd add is that although there are ways AI can be used against companies, there are so many other ways it can be really beneficial. I do have a few clients that are just a little timid about dipping a toe into AI because they're afraid of all the risks, as if the sky could fall. But just like you and I were talking about, once you get started and your mind starts to shift with the, sort of, brainwashing of prompts, you realize it can be an amazing tool that's really efficient and effective. And like you said, Tom, having the experience to see what actually is useful with AI is imperative. So I really appreciate you letting me talk about it; I'd love to talk to you anytime. This was a lot of fun, I knew it would be, and I appreciate everybody listening.
Tom Hagy: Yeah. Well, thank you, Adria. Thank you very much. We will do it again. Have a good rest of your day.