[Lecture, January 8, 2026] Optimal Liability Design for Medical AI

Speaker: Tingliang Huang, Professor at the University of Tennessee, Knoxville, United States

Time: 9:30 a.m., January 8, 2026

Venue: Room 109, Building 12, Wushan Campus

Biography

Tingliang Huang is a tenured Full Professor in the Haslam College of Business at the University of Tennessee, Knoxville. His research interests include business analytics, AI, data science, new business models, the operations-marketing interface, supply chain management, service operations, innovation, and socially responsible operations. He has published nearly 20 research articles in top business journals such as Manufacturing & Service Operations Management, Marketing Science, Management Science, and Production & Operations Management. Professor Huang serves as an associate editor for Manufacturing & Service Operations Management, Service Science, Decision Sciences, Naval Research Logistics, and IISE Transactions, and as a senior editor for Production & Operations Management. He has won many research awards, including the 2025 Vallett Family Outstanding Researcher Award, the 2023 INFORMS Workshop on Data Science Best Paper Award, the 2018 POMS Wickham Skinner Early Career Research Accomplishments Award, the 2018 Most Influential Paper Award in Service Operations, and the 2015 Wickham Skinner Best Paper Award, among others.

Abstract

Artificial intelligence (AI) is increasingly integrated into medical decision-making, yet its liability implications remain complex, particularly when physicians differ in diagnostic skill and their quality is unobservable. This paper develops a principal-agent model in which a social planner designs medical liability to regulate a physician with private quality information who chooses among a standard treatment, a personalized judgment-based treatment, and an imperfect AI recommendation. Our analysis yields several novel insights. First, we show that the optimal mechanism under asymmetric information is surprisingly simple: a uniform, one-size-fits-all liability level for all physician types who deviate from the standard of care. Despite physician heterogeneity, this simple policy often achieves the full-information first-best outcome, particularly when standard care is reliable or AI is highly accurate. Second, the relationship between AI accuracy and optimal liability is non-monotonic. Contrary to common intuition, better AI does not always imply more relaxed liability. As AI accuracy increases, the optimal liability either decreases monotonically or follows an inverted-U pattern, depending on the uncertainty of the standard treatment. Third, asymmetric information does not universally reduce social welfare. Welfare loss arises only when standard care is unreliable and AI accuracy is too low; even then, its magnitude follows an inverted-U shape, initially increasing as AI complicates the regulatory problem, but declining as more accurate AI helps mitigate it. Finally, we find that information asymmetry is a double-edged sword in the presence of AI, and greater transparency does not benefit all stakeholders equally.
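For readers who want a feel for the mechanism before the talk, the script below is a deliberately stripped-down numerical sketch, not the paper's model: the payoff forms, the private-benefit parameter b, and the uniform skill distribution are all illustrative assumptions. It grid-searches, for several AI accuracy levels, the single uniform liability level that maximizes average treatment success, which is enough to see how one-size-fits-all liability can discipline low-skill deviations without shutting down useful AI adoption.

```python
import numpy as np

# A minimal numerical sketch, NOT the paper's model: the functional forms,
# parameter names (s, a, b, L), and the uniform type distribution are all
# illustrative assumptions. A physician of private skill theta chooses among:
#   "standard": success probability s; following the standard of care
#               carries no liability exposure,
#   "judgment": success probability theta, plus a private benefit b
#               (e.g., autonomy) that misaligns physician and planner,
#   "ai":       success probability a (follow the AI recommendation).
# Any deviation from the standard of care costs liability L on failure.
# The planner picks ONE uniform L to maximize average success across types.

def induced_success(theta, s, a, b, L):
    """Success probability of the action a type-theta physician picks."""
    payoffs = {
        "standard": s,                            # no liability exposure
        "judgment": theta + b - (1 - theta) * L,  # deviation, private benefit
        "ai":       a - (1 - a) * L,              # deviation
    }
    success = {"standard": s, "judgment": theta, "ai": a}
    return success[max(payoffs, key=payoffs.get)]

def welfare(L, s, a, b, thetas):
    """Average induced success probability under liability level L."""
    return np.mean([induced_success(t, s, a, b, L) for t in thetas])

thetas = np.linspace(0.0, 1.0, 201)   # assumed uniform skill distribution
L_grid = np.linspace(0.0, 3.0, 601)   # candidate uniform liability levels

for a in (0.60, 0.75, 0.90):          # sweep AI accuracy
    best = max(L_grid,
               key=lambda L: welfare(L, s=0.70, a=a, b=0.15, thetas=thetas))
    print(f"AI accuracy {a:.2f} -> welfare-maximizing uniform liability ~ {best:.2f}")
```

Running the sweep, the welfare-maximizing uniform liability need not move monotonically with AI accuracy, loosely echoing the abstract's second insight, although the exact shape here is an artifact of the toy parameterization rather than a result from the paper.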