Emerging Litigation Podcast

Agentic AI on Trial: You Be The Judge Part 2 - Smart City Traffic Control

Tom Hagy Season 1 Episode 116

A smart city traffic system powered by agentic AI promises efficiency—but what happens when it fails under pressure?

This is the second episode in a three-part series exploring real-world legal and governance challenges surrounding agentic AI.

In this episode, we examine a scenario where an autonomous system managing traffic signals, routing, and emergency coordination collapses during a perfect storm: a major event, road closures, and severe weather. The result—gridlock, crashes, delayed emergency response, and a life-threatening failure.

Featuring:

▪️ Galina Datskovsky, PhD, CRM, FAI — Board of Directors, FIT and OpenAxes; Information Governance and AI expert
▪️ Marina Kaganovich — AMERS Financial Services Executive Trust Lead, Office of the CISO, Google Cloud
▪️ Hon. Lisa Walsh — Florida Circuit Judge, 11th Judicial Circuit, Miami-Dade County

We explore what should have been built before deployment, including human oversight, escalation protocols, and safeguards that prioritize safety over optimization.

We also discuss:

▪️ Why AI systems can’t be trained for every edge case
▪️ The importance of validation, monitoring, and auditability
▪️ Legal liability and shared responsibility across cities, developers, and users
▪️ How sovereign immunity shapes public sector accountability

If you’re thinking about AI governance, liability, or public infrastructure risk, this episode offers a practical framework for evaluating responsibility before failures occur.

🙏 Special thanks to Galina Datskovsky, Marina Kaganovich, and Judge Lisa Walsh for sharing their insights and expertise, and to Kathryn M. Rattigan, Partner, Data Privacy + Cybersecurity with Robinson+Cole for bringing this team to the Emerging Litigation Podcast. 

______________________________________

Thanks for listening! 

If you like what you hear please give us a rating. You'd be amazed at how much that helps. 

If you have questions for Tom or would like to participate, you can reach him at Editor@LitigationConferences.com.

Ask him about creating this kind of content for your firm -- podcasts, webinars, blogs, articles, papers, and more. 

Why Cities Turn To Traffic AI

Tom Hagy

Welcome to the Emerging Litigation Podcast. I'm your host, Tom Hagy. This is part two of our You Be the Judge series, Agentic AI on Trial: AI Liability and Risks. In this episode, we're going to talk about traffic. AI is transforming urban traffic management worldwide, from Los Angeles to London. If you've driven in either of those towns, you know they could probably use it. Adaptive signal systems powered by AI can cut congestion, emissions, and travel times. You must, like me, often think when you're sitting at a red light, with no cars coming from any other direction, that you're contributing to pollution and wasting your time. But it's a good time to check your phone. No. So that leads to emissions that aren't necessary. In Pittsburgh, Pennsylvania, near where I grew up, the city's Surtrac system reduced wait times by 40 percent. Who couldn't use that? And emissions by 21 percent. That was a city where, in the '60s, the street lights would come on in the daytime. Those were smart streetlights, but there was so much pollution that it simulated nighttime and tricked them into coming on. That's not the case anymore. Pittsburgh is beautiful. Predictive modeling now forecasts congestion, and I'm talking about traffic congestion, with 85 percent accuracy. And integration with connected vehicles enables rerouting of those vehicles. But what happens when this autonomy fails? When it suddenly doesn't work, and there is congestion, and people are stopped unnecessarily? You're going to see how that can play out in the scenario our panel will present and discuss. They're going to talk about who bears responsibility. Is it the city, the vendor, the policymaker, somebody else? They're going to examine foreseeability, sovereign immunity, and auditability in public AI deployments. Again, this panel is Galina Datskovsky, PhD, a business strategy advisor; Marina Kaganovich, an attorney and compliance advisor at Google, you've heard of them; and an actual judge, Judge Lisa Walsh of Florida's 11th Judicial Circuit. I also want to thank Kathryn Rattigan, a partner with Robinson+Cole in Providence, Rhode Island. She pulled this all together. I've had the pleasure of working with her before; she's always working on some aspect of the law as it relates to emerging technologies. She will also provide a more in-depth introduction of our panel. And as always, the opinions expressed here are those of the individual panelists, not those of the organizations they represent. And with that, here's episode two of Agentic AI on Trial: AI Liability and Legal Risks. I hope you enjoy it.

Kathryn Rattigan

We have three leaders in legal compliance and technology. First we have Galina Datskovsky. She's an internationally recognized authority in compliance, information governance, AI, and data analytics. She has a PhD in computer science and really deep expertise in AI. She advises a lot of organizations on business strategy and serves on a number of boards. It's really interesting to hear her background, and she has a lot of great knowledge to share with this community. She's joined by Marina Kaganovich, an attorney and compliance advisor at Google. She specializes in AI governance, cybersecurity, risk management, and data privacy. She works with a lot of executive leadership teams on how to secure cloud migration in a compliant way in an evolving regulatory environment, and she's really knowledgeable in that. She runs global cross-functional programs for a lot of different organizations as an advisor, and she's part of the ARMA International Board. She's really a thought leader in this space as well. And then we're also joined by Judge Lisa Walsh, a circuit court judge in the 11th Judicial Circuit of Miami-Dade County, Florida, where she's an administrative judge of the appellate division, and she participates in an international arbitration division as well. An interesting fact: she's presided over more than a hundred jury trials, both criminal and civil. She has a lot of appellate decisions out there and lots of experience, and she brings her judicial experience and insight into this discussion. So they're really a great group, and it's interesting because they converge on law, technology, governance, and justice. All their perspectives are really interesting to put together in this You Be the Judge podcast.

Galina Datskovsky

Okay, so let's look at a different scenario in this podcast. We started our You Be the Judge series on agentic AI, and in this scenario we want to take an entirely different tack, get away from our medical issues, and look at the use of agentic AI in other areas. In this scenario, we have a major metropolitan area that implements an agentic AI system designed to autonomously manage all aspects of urban traffic flow. What does that mean? That means optimizing traffic light timings, dynamically rerouting public transport, advising drivers via navigation apps, and even coordinating emergency vehicle movements. So all of this is now happening through agentic AI. The AI is constantly learning from real-time data to minimize congestion and commute times. However, an unforeseen series of unfortunate events happens to occur simultaneously: a large-scale public event, multiple unexpected road closures, and a severe weather anomaly all come together to make life quite difficult for the AI agents. The AI's autonomous adaptive logic, in its attempt to over-optimize for local conditions without a global systemic understanding, creates lots of bottlenecks and directs traffic into increasingly congested areas. Its independent decisions lead to a large-scale gridlock that lasts multiple hours, causing significant delays in emergency response and an increase in minor traffic accidents. So we have quite a mess on our hands in this particular city. One consequence of this mess is that an ambulance cannot get to the hospital for hours, and in fact the patient being transported does not survive the ride. So we have some accidents, we have some severe outcomes, we have a death. Since autonomous and human-driven vehicles share the road, given the number of convergent events, the system can't cope with this mix of predictable and unpredictable decision making, the unpredictable decision making, of course, being done by the humans. And further, due to interconnectivity, not only were the operations of the smart city impacted, but so was navigational guidance, which also tries to optimize for conditions and starts redirecting people. So now we are in a real mess and a jam, and that's the scenario. And once we have an absolute mess and everybody's nerves are frayed, let me ask our judge: we have this mess, we have all of these consequences, and what are the potential causes of the problem? Is it the training data, back to that training data idea from our first example with the medical system? Is it that the system was trained on typical data, not extreme scenarios? What other suggestions might you have, and what is the impact? And further, what should the agent do in such a case? Should the AI have alerted the police, followed some kind of escalation, closed all roads until further notice, like we did on 9/11? What would have been a reasonable way to deal with the problem? Of course we'll get to liability soon, but let's do that first.

Failure Modes And Safety Valves

Judge Walsh

All of this is real. We invented these scenarios and then keep coming across stories in the news of this type of bad outcome actually happening out there. So the issue is the adaptability of the particular system to a series of unfortunate events. Whatever training you can put in, whatever data you can put in, conditions on the ground can defy it. If the aliens attack unexpectedly and land in the middle of your metropolitan highway, and that's never happened before, the system may be completely bamboozled about how to react. It has agentic capability, but it is not a human brain, and that's necessarily the problem. So as far as the potential causes go, start with training data: was the system trained on how to manage events when all roads are gridlocked? That's critical. We've all experienced it; we all live in major metropolitan areas, I think. The president is coming to town at the same time that bridges are closed and there's a major accident on another artery. Everything happens at once and there's unexpected gridlock. The solution is usually human intervention: police are called out to a certain area to direct traffic. But if everything is automated and you don't have humans on the ground being called in to redirect traffic, and movement becomes impossible, where will the accusations fly? So the first question, if the developer is the one called out as the cause of the problem and finds itself in court, is how the system was created. Was it just unifying Waze, Google Maps, the Maps app, and satellite data, all poured in with certain prompts? Or was it trained on scenarios? Was there a robust training function for various scenarios that allowed the system to pivot and decide how it was going to direct traffic to lighter arteries or call in for help? If the system was trained on "typical" data, not extreme scenarios, and that's all it can handle, that's when you end up with, say, a Waymo that just circles LAX for five hours and never lets its passenger out, which is something that happened recently, or a system with no alternative plan for directing all of the traffic lights and the rest of the urban system to react. As for what the agent should have done, some of the things courts might look at, or a plaintiff may argue, are that there should be a safety valve in the system for calling in the police to direct traffic, or some kind of escalation procedure that interrupts the system to bring in human intervention, or some other fail-safe that kicks in when the gridlock or the delays reach a certain threshold, or a way to close roads for emergency traffic so that emergency vehicles can get through. Marina, do you have any ideas on what some of the other potential causes could be for a system to overload in that way?

Marina Kaganovich

Yeah, I think you're right. Certainly looking at the way it was trained would be really critical, and then, as you mentioned, having some sort of escalation protocols. Something we often talk about in general, when we talk about governing AI and now agentic AI as well, is looking at where the choke points are where it makes sense to have a human in the loop. In the scenario we devised, there is no human in the loop, it looks like; the navigation and the traffic management are left exclusively to the AI agent's domain, and there are some dire consequences. But you're right in the sense that certain human-in-the-loop aspects can be programmed in: if certain conditions are met, then escalate. And there should also be the concept that even without specific conditions being met, if there's sufficient uncertainty, or if potential outcomes are deemed sufficiently negative, there should be some sort of escalation protocol as well. I think that responsibility should primarily sit with the deployer of the system: defining what that looks like, and then determining whether the developer of the system they're getting has arranged for those types of controls to be implemented, either by default or with enough flexibility, in adopting this AI for use, to identify the types of scenarios where a human would be needed. And then, if a human is needed, who that would be. You mentioned the police, but it might be other forms of emergency personnel, for instance.
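
To make the escalation idea concrete, here's a minimal sketch of what such a protocol could look like in code. It's purely illustrative: the thresholds, telemetry fields, and handler functions are hypothetical, not drawn from any real deployment.

```python
from dataclasses import dataclass

# Hypothetical thresholds a deployer might set; real values would come
# from traffic engineering studies, not from this sketch.
MAX_AVG_DELAY_MIN = 15.0     # citywide average intersection delay, minutes
MAX_GRIDLOCKED_ZONES = 3     # zones with near-zero traffic flow
MAX_UNCERTAINTY = 0.4        # agent's self-reported uncertainty, 0.0 to 1.0

@dataclass
class Telemetry:
    avg_delay_min: float
    gridlocked_zones: int
    uncertainty: float
    emergency_vehicle_blocked: bool

def revert_to_fixed_signal_plans() -> None:
    # Stand-in for handing signals back to pre-programmed timing plans.
    print("Signals reverted to fixed timing plans.")

def page_human_operator(t: Telemetry) -> None:
    # Stand-in for notifying police / traffic management.
    print(f"Operator paged: delay={t.avg_delay_min} min, "
          f"gridlocked={t.gridlocked_zones}, uncertainty={t.uncertainty}")

def should_escalate(t: Telemetry) -> bool:
    """Escalate on explicit conditions, or on uncertainty alone."""
    return (t.emergency_vehicle_blocked
            or t.avg_delay_min > MAX_AVG_DELAY_MIN
            or t.gridlocked_zones > MAX_GRIDLOCKED_ZONES
            or t.uncertainty > MAX_UNCERTAINTY)

def control_cycle(t: Telemetry) -> str:
    if should_escalate(t):
        # Fail safe, not silent: degrade to a dumb-but-predictable state
        # while a human takes over, rather than keep optimizing blindly.
        revert_to_fixed_signal_plans()
        page_human_operator(t)
        return "escalated"
    return "autonomous"

# The convergence in this episode's scenario would trip several conditions:
print(control_cycle(Telemetry(42.0, 5, 0.7, True)))  # -> "escalated"
```

Note the uncertainty test: as Marina describes, the system escalates not only when specific bad conditions are met, but whenever the agent itself is too unsure to be trusted on its own.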

Judge Walsh

What concerns me is that if a system like this is rolled out, and this is not fictional anymore, it becomes an opportunity for, say, a municipality to cut costs and eliminate certain positions and certain jobs in urban planning or traffic management. So the only way to ensure a quick response when human intervention is needed is to build it into the rollout: if this happens, this is who we call. There needs to be an actual person there; there needs to be an actual department there, if that's the fail-safe, if there's no fail-safe that can be implemented autonomously without human interaction. For example, maybe there could be yet another agent, an urban planning agent, that exists as a separate thing to call the right person, to bring in the police, to bring in traffic management, and to do so quickly and efficiently when prompted.
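
Judge Walsh's "agent that watches the agent" idea maps to a familiar pattern: an independent monitor, with its own state, whose only job is to route alerts to the right humans. A minimal sketch, again with invented departments and details:

```python
# Hypothetical routing table: which humans an independent watchdog agent
# contacts for each failure mode. All names and details are invented.
DISPATCH = {
    "gridlock": "Traffic Management Center",
    "emergency_blocked": "Police Dispatch",
    "signal_fault": "Department of Public Works",
}

def watchdog_alert(failure_mode: str, details: str) -> None:
    """Runs separately from the traffic-control agent, so a confused
    controller can't also silence the call for help."""
    department = DISPATCH.get(failure_mode, "City Emergency Operations")
    print(f"ALERT to {department}: {failure_mode} -- {details}")

watchdog_alert("emergency_blocked", "ambulance stalled 40 min en route")
```

The design point is separation of duties: the watchdog doesn't share the controller's model or its confusion, so it can still summon help when the optimizer has gone haywire.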

Marina Kaganovich

Assume they've implemented an AI agent and, the vast majority of the time, it works well, right? There are obviously certain edge cases where escalation might be needed. But, as Galina mentioned, we have some dire consequences here. If a suit is brought, what can the municipality argue? And to what extent do you think responsibility would be assessed against it?

Judge Walsh

The unique aspect of municipal liability is that the rule, rather than the exception, is something called sovereign immunity. For decisions that are made structurally in a city, city planning, this is how we run our lights, this is how we do this or that, the city or the municipality is likely to argue that it is immune from any lawsuit. When the city makes an individual decision in an individual case, like failing to cut back a particular tree so that it interrupted a line and caused a death, then the city or its proxy, whatever that might be, may be liable. But that's always the theme, the thread that runs through governments adopting autonomous systems, or anything, frankly: the government is more likely than not to argue that it is immune from any kind of lawsuit. It's an executive function; it affects many people; it is not directed at an individual. So against traditional, typical negligence claims, because this is how we hang our traffic lights or how we pave our roads in general, the government always has that argument to make.

Training Limits And Developer Duties

Shared Fault Across Drivers And AVs

Galina Datskovsky

Yeah, I wanted to come back, if you don't mind, and comment on a couple of things you said about training data. It's almost impossible to train for every kind of scenario, because there wouldn't be training data. A good example is 9/11 and grounding every aircraft, right? There was no such precedent before, so somebody made that decision. Now, it is not unprecedented for agentic AI to be at the point of taking all the data and making a bold decision, because that's the whole creativity of current AI. On the other hand, would we want it to make that decision completely autonomously, or, to your point, with an escalation for approval? And on that point, I would say it is the developer's job to decide what constitutes a bold decision, one not just derived from the data in a regular fashion, and where to escalate. I think that's important, and in this case the developer has a lot of responsibility for how they implement and program that. And of course, you're right, the municipality should check whether that's available and make the right use of it. And certainly multiple agents talking to multiple agents is a good idea. I'm wondering, though: what about the drivers involved in all of these incidents, because people do stupid things in bad situations, and potentially the makers of autonomous vehicles, which are also making decisions on the fly? They're basically agents in their own right. Do you think liability could be extended to a variety of sources at that point? Because really the conglomeration of all of those is the cause of the problem, not the traffic system alone.

What A Lawsuit Would Examine

Judge Walsh

It is likely to. Different states deal with shared liability in different ways, but more often than not, liability is shared among various potential tortfeasors. And ultimately, if there's a lawsuit filed, it goes to a jury, and the jury decides by what percentage. That's very common. So one should not look at who is liable as an either-or question; often it's both. So where would liability fall for an individual driver? An individual driver is always subject to a duty of reasonable care. If the roads are blocked and the driver decides to drive up the shoulder to try to get around the traffic, and then an emergency vehicle can't get through or they cause an accident, of course they are liable. If they're stuck in the scrum of gridlock traffic, probably not. For an autonomous vehicle: if the autonomous vehicles just start circling a block aimlessly and mindlessly, and it makes no sense because the system that controls them is confused, if they're part of, say, one global autonomous taxi system, then yes, the developer may have a problem there, because their system went haywire as soon as things looked strange, or their autonomous vehicles did not recognize what was happening and just crashed into a bunch of cars. I hate to say it depends, but it always depends: who is in charge of the decision making? What went into the decision they made? How did the end result relate to that decision? Is this something that could have been prevented? Do they have a duty, and to whom? Do they have a duty to the person stuck in traffic? Do they have a duty to the person riding in the emergency vehicle? And you work your decision tree out from there. As far as the system developer, there's obviously going to be scrutiny on the developer if the problem seems to come down to causing all of the lights to turn red at the same time, or causing certain streets to be automatically closed incorrectly. Then: why did the system do that? What decision making in the programming, the training models, or the data fed into the system had an effect on the ultimate actions the system took in reacting to the unforeseen circumstances on the ground?