Hearing the patient’s voice in AI-enhanced healthcare

  1. Kate Womersley, research fellow, core trainee in psychiatry1 2,
  2. KWM (Bill) Fulford, emeritus professor of philosophy and mental health3,
  3. Ed Peile, professor emeritus, medical education3,
  4. Philipp Koralus, Fulford Clarendon professor of philosophy and cognitive science4,
  5. Ashok Handa, professor of vascular surgery, honorary consultant vascular surgeon5 6

  1. The George Institute for Global Health at Imperial College London

  2. NHS Lothian, Scotland

  3. University of Warwick

  4. Institute for Ethics in AI, University of Oxford

  5. University of Oxford

  6. John Radcliffe Hospital

Behind every data scientist and entrepreneur celebrating the powers and potential of artificial intelligence (AI) to enhance modern healthcare, there is a silent majority who are more circumspect. Of course, big data and disruptive technologies in healthcare are not new. Medical providers and the public have grown familiar, and even comfortable, with computer-aided triage when calling NHS 111; electronic health records; and robotic surgery and scans interpreted at first pass by an algorithm. Big data and AI assistance are needed to meet the UK’s health demands and ambitions, and the government is investing in this future. Trusts can now bid for a share of £21m from the Department of Health and Social Care (DHSC) to accelerate the rollout of promising AI tools to mark the NHS’s 75th birthday.1 “NHS data is a phenomenal resource that can revolutionise healthcare, research and the life sciences,” writes Ben Goldacre in The Goldacre Review, commissioned by the DHSC in 2021. But he continues: “data alone is not enough.”2

AI offers inferences and indicates probabilities by applying complex algorithms to reams of personal, public, and government data to execute tasks previously beyond human capability. In particular, using extensive computational resources, AI can learn to make inferences about individual cases based on patterns in these data. However, neither these computational resources nor vast amounts of data guarantee that AI outputs will take into account key values: the values that we hold about what is right and wrong for us as individuals, as clinicians and as patients, in the ways we practise and the ways we are cared for.

Anxiety accompanies the projected shift in decision-making power away from people and towards AI. Worries arise that our success criteria will slip towards what is easy for existing AI technology to deliver, rather than what reflects the best practices we want to see in medicine and elsewhere. Discussion around the risks of AI tends to focus on safety, data security, and discrimination in machine-based decisions. Ensuring privacy and accuracy can be summed up as the “right to a well-calibrated machine decision.”3 This requires transparent programming, scrutiny, and regulation to remove baked-in biases. But that is only part of the problem. Healthcare professionals fear losing influence and authority in the clinical settings of the future, and there is also a real threat to patients’ autonomy. While appropriately trained AI can outperform clinicians in a growing range of diagnostic tasks, the AI technologies best placed to harness the power of large datasets operate as a black box, without transparent decision-making criteria. This threatens to make it impossible for clinicians and patients to engage critically with AI recommendations, undermining professional and public trust. Don’t patients have “the right to a human decision,”4 a human opinion, or at least a human discussion?

When visiting a GP or attending hospital, patients want to be treated safely and effectively but, just as importantly, to feel heard. Many express a loss of human agency at the idea that technology and AI may be over-involved in their care, feeling vulnerable to the possibility of a non-human, and possibly inhumane, process deciding what happens next. This imposition of technology can feel unfair and devoid of empathy for our preferences.5 Even though many algorithms may be less prone to error than any single human adjudicator, when they go unchecked there is a suspicion, justified or otherwise, that machines will be unfair to us, opaque, and incompatible with our values.

These concerns are understandable. Yet we believe AI has the potential to strengthen the voices of patients in healthcare decision-making. AI relies on predictions based on historical data, but doctors should be particularly interested in patients who do not fit these generalisations. One way in which individuals are unpredictable is in their priorities and values, often the crucial aspects of their individuality that they most want acknowledged and heard in decisions about their care. For example, a patient is referred to an orthopaedic surgeon for knee pain, is deemed a suitable candidate for a total knee replacement, is consented, the operation is booked, and she is reassured that she will be pain free in 18 months’ time. What could be the problem? On the way out of the clinic, the patient mentions her passion for gardening. The surgeon probes further. He knows her range of movement will not improve with a prosthetic knee, and may in fact worsen. Faced with the choice between living with the pain but continuing gardening, and being pain free but not readily able to kneel, this patient chooses to avoid the operation and manage her pain with physiotherapy and NSAIDs. This may be the best outcome for this individual, and it becomes apparent only by asking what matters most to her about her quality of life.6 Incorporating patient values also saves a rationed health system money by avoiding expensive procedures doomed to fail because they do not meet patient expectations.

Values-based practice (VBP) is an approach to working with complex and conflicting values in healthcare that focuses on what matters to the individual, both patient and clinician, as the basis of shared decision-making.7 It is not antagonistic to evidence-based medicine (EBM) but a partner to it, linking science with people.8 Exploring values with a patient in the light of evidence could, we believe, be expertly curated by AI to support shared clinical decision-making that combines evidence and clinical experience with individual patients’ values.

When clinicians consider VBP in today’s time-poor, resource-scarce NHS, it is common to feel that the consultation is already too compressed for shared decision-making. This is bad for patients and for clinicians. For patients it means not feeling heard. For clinicians it means a mismatch between their own standards, hopes, and aims in the consultation and what they can actually deliver, with a resulting loss of wellbeing and burnout. AI’s promise of rapid problem solving risks making a slow, exploratory, and possibly faltering discussion about values seem even more extraneous to the consultation. This is a false economy. As the UK Supreme Court’s judgment in Montgomery v Lanarkshire Health Board (2015) reminds us, clinicians have a legal as well as an ethical responsibility to work in dialogue with patients so that they sufficiently understand the risks and benefits of the available options in the context of the patient’s own values, whether around the indications for a caesarean section or which antidepressant, with their differing side effects, might best suit.9

Predictive AI risks embodying a pattern of bad practice that overlooks values; but suitably adapted, AI could support and expand clinicians’ use of shared decision-making in the consultation. This perspective favours AI systems that are a hybrid of machine learning and other approaches based on questions and reasons.10 Similar considerations may apply not only to the deployment of AI in medicine but also to other areas that distinctively implicate the perspective of individuals. Developing best practices for AI in medicine that respect human values could provide a beacon for the deployment of AI in general, as this new technology infuses virtually all areas of human concern in the coming years.

With renewed interest in how to teach medical students to manage uncertainty, clinical practice becomes far richer and more interesting, as well as more effective, when we take the outliers seriously and ask, “What matters to you?” Scaling up AI badly would produce the worst version of public health in the UK: treating the population as a uniform mass, or as a set of subgroups that behave with clockwork reliability. This would do untold damage to patient trust while risking patients’ lives. Rather, AI should be optimised as an assistant to clinicians, while the patient remains respected as the expert on themselves. The clinician’s central role is as an interface between AI and the patient, helping to navigate uncertainty, which will require significant upskilling in VBP to be done well. The dialogue that results would support clinicians in fulfilling their aims in the consultation while ensuring that every patient’s voice is truly heard.

Footnotes

  • Competing interests: KW receives research funding from the Wellcome Trust. KWMF is a fellow of St Catherine’s College and a member of the Philosophy Faculty, University of Oxford; emeritus professor of philosophy and mental health, University of Warwick; founder director of the Collaborating Centre for Values-based Practice in Health and Social Care, St Catherine’s College, Oxford; and founder editor of Philosophy, Psychiatry, and Psychology. PK is a fellow of St Catherine’s College, Oxford. AH is the clinical tutor in surgery, University of Oxford, and a fellow of St Catherine’s College, Oxford.

References

  1. Fulford KWM, et al. Essential Values-Based Practice: Clinical Stories Linking Science with People. Cambridge University Press, 2012. Second edition forthcoming in 2025.
