Artificial intelligence plays an increasingly significant role in medicine. We see it in areas ranging from chatbots connecting patients with providers to the performance of robotic surgeries. Indeed, AI has become a valuable tool for improving patient care.
That said, AI is not perfect. Situations can and do arise where doctors do not reach the same conclusions as technology or algorithms.
When might they disagree?
Various studies show that AI can outperform doctors when making diagnoses and developing treatment plans. For instance, a 2019 Harvard Business Review article reported that AI did a better job of diagnosing patients and delivering cost-effective care than doctors.
However, medicine is a highly complicated field, and technology and humans do not always reach the same conclusions.
For instance, algorithms can be faulty or inconclusive; systems may not consider nuance or patient details that influence diagnosis and treatment. Under these circumstances, decisions – and possibly liability – can fall solely on doctors’ shoulders.
What happens when a conflict arises?
In general, physicians do not find it easy to dispute AI conclusions or reject recommendations. As this article notes, doing so can leave practitioners vulnerable to legal exposure, including medical malpractice claims.
However, doctors should have the support and resources to make decisions in the best interests of their patients, even if they conflict with technology. Some strategies include:
- Involving patients by explaining clinical ambiguity so they can make informed decisions and participate in their care
- Seeking multiple algorithms or technologies to reach a consensus opinion
- Preparing a rationale to justify the rejection of AI’s conclusions
- Getting a second opinion from another practitioner
These approaches can allow doctors to provide the care they believe is correct, even if AI says something different.
For the most part, AI can improve and augment the invaluable services doctors and nurses provide. However, neither humans nor technology are perfect, and delays or flawed decisions can ultimately compromise care.
Under these circumstances, determining where a mistake occurred and which party is responsible will be vital, both for resolving liability and for improving the systems involved.