Will AI reduce or increase inequities in cancer care?

Dario Trapani

While the use of artificial intelligence systems is expanding in oncology, ethical issues, including biases introduced in care delivery and uneven access to technology, raise concerns about whether some patients may be left behind

In modern oncology, the ancient impulse of humans seeking counsel from enigmatic and seemingly mystical forces finds a new avatar in artificial intelligence (AI). Much like the ancient Sibyl, the oracle of Delphi, AI confronts oncology professionals with guidance based on not-yet-understood mechanisms, often delivered in a ‘black box’ (J Med Ethics. 2021 Jul 21:medethics-2021-107529). However, while the oracle’s statements were grounded in sacral authority, AI comes with a subtler peril: the potential for inconsistency and inequity within systems that are inherently shaped by human biases and limitations.

AI-based systems, now embedded in imaging, pathology, prognostic modelling and treatment selection, hold promise to boost human performance in cancer research and clinical practice. However, as the use of these tools proliferates, so does the risk of exacerbating existing inequalities in the field. Access to digital health and AI-powered oncology is uneven: regulatory approval processes are fragmented globally, and disparities in infrastructure and reimbursement are common across countries, even within the European Union (EU). When unevenly distributed, AI could widen rather than close cancer survival gaps, as unequal access to timely and effective diagnostics and therapies typically translates into disparities in patient outcomes.

Despite numerous efforts in recent years to integrate digital health, many tools underperform in diverse populations and embed systemic inequities into the delivery of care. One example is the pulse oximeter: studies showed that the device systematically overestimated oxygen saturation in Asian, Black, and Hispanic patients, leading to missed cases of occult hypoxemia and to delays in COVID-19 treatment eligibility under established treatment guidelines (JAMA Intern Med. 2022 Jul 1;182(7):699-700).
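To make the mechanism concrete, the sketch below simulates how a small, group-specific measurement bias translates into unequal rates of missed hypoxemia. It is a toy illustration with synthetic data: the group labels, bias size and thresholds are assumptions chosen for demonstration, not figures from the cited study.

```python
# Toy audit of a measurement device with a group-specific bias.
# All numbers are synthetic assumptions, not data from the cited study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "sao2_true": rng.normal(92, 3, size=n),  # reference arterial saturation
})
# Assume the device overestimates saturation by ~3 points in group B only.
bias = np.where(df["group"] == "B", 3.0, 0.0)
df["spo2_device"] = df["sao2_true"] + bias + rng.normal(0, 1, size=n)

# 'Occult hypoxemia': the device reads >= 92% while true saturation is < 88%,
# so the patient silently falls below a common treatment-eligibility cutoff.
df["occult"] = (df["spo2_device"] >= 92) & (df["sao2_true"] < 88)
df["error"] = df["spo2_device"] - df["sao2_true"]

# Group B shows a positive mean error and a markedly higher occult rate:
# the same device quietly under-serves one population.
print(df.groupby("group")[["error", "occult"]].mean())
```

The same per-group audit, run against a reference standard, is how such device biases are detected in practice: aggregate accuracy can look excellent while one subgroup carries nearly all the missed cases.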

Large language models (LLMs), now being piloted in healthcare and occasionally linked to electronic health records, have exhibited similar issues. A 2023 study tested four major LLMs on questions probing race-based medical misconceptions: all models produced harmful, debunked race-based answers in some responses, and did so inconsistently across repeated queries (NPJ Digit Med. 2023 Oct 20;6(1):195). Another instructive case comes from a 2019 analysis of a commercial health risk algorithm used in care management: the system predicted patients' future care needs from their healthcare costs; however, since historically less had been spent on Black patients with the same conditions, the algorithm underestimated their illness severity (Science. 2019 Oct 25;366(6464):447-453). According to the study authors, correcting the bias would have more than doubled the percentage of Black patients eligible for additional care, from 17.7% to 46.5%. These examples show how digital tools can perpetuate or exacerbate inequities when built on biased data or flawed proxies.
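The cost-as-proxy failure is easy to reproduce in a few lines. The following sketch, again with synthetic data and assumed numbers rather than anything from the Science study, shows how an enrolment rule based on cost rather than on health need under-selects the group on which less has historically been spent.

```python
# Sketch of proxy-label bias: selecting patients for extra care by predicted
# COST instead of health NEED. Synthetic data; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.choice(["A", "B"], size=n)          # B: historically under-spent
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # true illness burden

# Assume group B accrues ~30% less cost than group A at the same level of need.
cost = need * np.where(group == "B", 0.7, 1.0) * rng.lognormal(0, 0.2, size=n)

# A cost-trained model effectively ranks patients by expected cost; here the
# top 3% by each criterion are enrolled in a care-management programme.
enrolled_by_cost = cost >= np.quantile(cost, 0.97)
enrolled_by_need = need >= np.quantile(need, 0.97)

for g in ("A", "B"):
    m = group == g
    print(f"group {g}: cost-based enrolment {enrolled_by_cost[m].mean():.3%}, "
          f"need-based enrolment {enrolled_by_need[m].mean():.3%}, "
          f"mean need of cost-enrolled {need[m & enrolled_by_cost].mean():.2f}")

# Under the cost rule, group B is enrolled less often and its enrolled patients
# are sicker on average: the proxy converts spending history into 'risk'.
```

Switching the selection label from cost to a direct measure of need removes the gap, which mirrors the kind of label correction the study authors describe.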

It is within this landscape that the recently approved EU AI Act may help bring some order. By introducing a risk-based regulatory framework for AI, it places strict requirements on high-risk systems such as those used in healthcare, including oncology, with mandates for transparency, human oversight, and post-market monitoring. However, ambiguities remain around explainability, liability and equity, and around general-purpose AI models which, if trained on biased data, could exacerbate disparities in care for marginalised populations.

An ethical frontier of AI-driven oncology lies on our horizon as professionals in the field, where the key question is not whether these tools can perform accurately, but whom they serve and who, conversely, is left behind. The true risk of AI stems not from its analytical precision, but from its potential to consolidate existing disparities while appearing technologically neutral.

As oncology care providers, we must resist both technological determinism and reflexive conservatism. The challenge is to shape an AI-integrated oncology that enhances clinical judgment, reduces disparities, and preserves patient dignity. This requires advocating for equitable access to AI as a matter of clinical ethics, actively participating in regulatory consultations, and investing in the education of the oncology workforce. Most importantly, it demands that we remain vigilant against delegating decisions that we are not ready to take responsibility for. In medicine, as at Delphi, it is not enough to heed the oracle: one must decide how to act upon its answer and ensure that all have equal opportunity to hear it.
