AI in Healthcare Raises Liability Risks for General Counsels

2 min read · Source: LexBlog

AI adoption in healthcare introduces complex liability risks for General Counsels.

Why it matters: AI-driven healthcare tools elevate liability concerns, impacting compliance strategies for General Counsels as they navigate emerging legal landscapes and regulations.

  • Ongoing litigation examines AI's role in medical malpractice, intensifying liability discussions.
  • The Federation of State Medical Boards holds that physicians remain liable even when they rely on AI tools.
  • Illinois and Texas enforce specific laws addressing AI misuse in healthcare.
  • As of October 2023, the FDA had not approved any generative AI medical device.

The integration of artificial intelligence (AI) in healthcare is not only advancing medical capabilities but also transforming the landscape of legal liability—a pressing concern for General Counsels. As AI tools become commonplace, determining accountability in cases of medical error or harm becomes increasingly complex.

This issue is underscored by ongoing malpractice cases exploring the role of AI in patient care decisions. Such cases may redefine how liability is attributed, moving beyond traditional product liability doctrines. LexBlog discusses potential shifts in culpability, suggesting that courts could see AI tools as aids rather than autonomous decision-makers.

Furthermore, the Federation of State Medical Boards maintains that liability remains with practitioners utilizing AI, ensuring the end-users—primarily physicians—are held accountable.

Illinois and Texas have taken legislative steps to address AI's use in medical settings. Illinois bars AI that is not overseen by a licensed professional from making therapeutic decisions, while Texas requires that patients be told when AI is used in their care. The result is a patchwork of state laws that General Counsels must navigate.

Federal regulation lags behind, as illustrated by a GAO report noting that the FDA had not approved any generative AI medical device as of October 2023. Meanwhile, the European Union's comprehensive framework, built on its AI Act and Product Liability Directive, could inspire similar rules in the U.S.

By the numbers:

  • 0 FDA approvals for generative AI medical devices as of October 2023—highlighting regulatory delays.
  • 2 states, Illinois and Texas, have enacted specific AI healthcare regulations—exemplifying proactive legislative measures.

Yes, but: Although AI promises to enhance healthcare outcomes, it also introduces unprecedented legal complexities.

What's next: Regulators and legal professionals are closely watching the outcomes of ongoing AI litigation, which may redefine how liability is allocated in healthcare.