Harnessing Large Language Models in oncology: ESMO’s framework for integration in the clinic

The ELCAP statements, developed by a group of international experts, help oncology stakeholders recognise the opportunities and the risks of these artificial intelligence systems by categorising them into three types

The opportunities and risks intrinsic to a technology vary depending on its end user, and large language models (LLMs) are no exception. By processing natural language in a human-like manner, these artificial intelligence (AI) systems can, for example, support medical oncologists in clinical decision-making and reduce administrative burden. At the same time, they can introduce biases or oversights into cancer care, or deliver self-management advice to patients at the risk of providing less tailored and less accurate information. As the use of LLMs expands rapidly in oncology, ESMO has acknowledged the urgent need for structured guidance across all stakeholder groups to advocate for the informed, safe and effective implementation of these tools into clinical practice.

In the “ESMO Guidance on the Use of Large Language Models in Clinical Practice” (ELCAP), developed by a group of 20 experts, including members of the ESMO Real World Data & Digital Health Task Force, and published in Annals of Oncology (Articles in Press, October 18, 2025), three types of LLM applications in oncology are described: clinician-facing LLMs, patient-facing LLMs, and AI systems operating in the background of health systems.

Each comes with specific potential benefits and challenges, as one of the paper’s co-authors, Prof. Miriam Koopman from the University Medical Center Utrecht, Netherlands, highlights.

Why is it important to acknowledge differences between LLM systems for their proper use and implementation in the clinical setting?

LLM technologies have broad applications and are already integrated into cancer care. However, greater awareness is needed of which specific needs these systems can and cannot address. The ELCAP statements offer practical guidance for oncologists, patients, and technology developers, among others, on how to evaluate and implement these systems in oncology. Identifying three distinct types of LLMs helps recognise where AI can effectively offer support, and where its use may involve significant risks. For example, ELCAP Type 1 and Type 2 LLM systems – encompassing patient-facing applications and tools directly used by oncology professionals, respectively – are characterised by direct interaction, real-time conversation, and personalised support. End users must be aware that the reliability and accuracy of LLM outputs depend on input data being complete and correct: missing data in an electronic health record (EHR) can lead to AI-generated hallucinations, potentially introducing incorrect clinical details that may affect clinical decisions. Another risk arises when patients use LLMs to seek personalised advice but submit questions containing only partial or unclear information; in such cases, the system may provide misleading or inappropriate responses.

The key objectives of ELCAP are to provide practical guidance for all oncology stakeholders, including patients, on how to evaluate and use LLMs in oncology. How can they be guided toward the appropriate use of LLM systems?

This is a very relevant question, and one for which there is not yet a clear answer. We know that patients are generally open to the integration of LLMs into their care pathways, and many are already using chatbots to seek information about their disease or to get support. However, we also know that there are significant gaps in patients’ digital literacy: while some are well informed and highly educated on the subject of LLMs, others are unaware of the possibilities the technology may offer.

So, we first need to enhance education opportunities for healthcare professionals by publishing manuscripts, providing recommendations and organising events like the ESMO AI & Digital Oncology Congress, which aims to provide an opportunity for scientific, methodological, regulatory, policy and industry experts to connect and interact, and to involve general cancer healthcare practitioners in discussions about AI and digital health. At the same time, it is essential to reinforce collaboration with patient organisations to empower more patients in the safe use of AI and to encourage open consultation with their clinicians about which tools are reliable.

Of course, at an individual level, every oncology professional plays an active part in supporting and educating patients to use LLMs responsibly. This is particularly challenging given that technology evolves so rapidly and novel opportunities continuously emerge.

It is crucial to keep the conversation on LLMs active by talking openly, sharing information, and recognising that both clinicians and patients need education.

ELCAP Type 3 AI systems are those that do not interact directly with end users but are deployed in the “background” at healthcare institutions. What are the particular challenges in implementing these systems?

These LLMs can run automated data extraction and analysis, identify eligible candidates for clinical trials, or perform repetitive tasks in place of humans, and they are already operating within EHR systems in many hospitals. It is very important that clinicians are aware of the use of these systems, and understand both their benefits and risks, as LLMs can either improve or impair workplace productivity depending on how key issues such as interoperability and privacy are addressed.

Currently, the healthcare landscape in the European Union (EU) is highly fragmented, and almost every hospital has its own EHR system. Some countries have implemented a single EHR at national level, and I think they are one step ahead of the others in integrating LLMs in the “background” because they face fewer interoperability challenges.

I hope the European Health Data Space (EHDS) Regulation, which entered into force in March 2025, will help the different countries and hospitals with the implementation and validation of LLMs, and also encourage collaboration between policymakers, clinicians, patient organisations and AI companies. The Regulation aims to establish a common framework for the use and exchange of electronic health data across the EU and to harmonise current discrepancies in data privacy and governance.

By complementing these new regulations with practical guidance, the ELCAP statements are intended to help countries and clinicians navigate this rapidly evolving and complex field.

AI & Digital Oncology: Resources in one place

Looking for further insights into how artificial intelligence and digital tools are impacting oncology? The ESMO AI & Digital Oncology Hub brings together expert perspectives, research updates, and thought leadership from across oncology.

It is a space where you can stay informed, discover resources, and follow the conversation on digital innovation in cancer research and treatment.

To further explore the transformative potential of AI in oncology, the very first ESMO AI and Digital Oncology Congress 2025, taking place from 12 to 14 November, will provide a dedicated platform focused on the latest advances in AI and digital technologies in cancer care.
