Computational models can overcome current limitations of 3D analysis of tumour samples
Human tissue is inherently three-dimensional (3D). In oncology, however, we still rely on two-dimensional (2D) snapshots from tumour slides. While this might be sufficient for many diagnoses, it has become clear that diagnostic interpretation can often change when the full 3D tissue volume is examined. Sampling tissue in 2D therefore leaves substantial discovery potential untapped, and historically there have been few computational methods capable of analysing true 3D data.
At the Mahmood Lab - where we primarily work on computational pathology and integrate data from multiple modalities to enable early diagnosis, prognosis, and prediction of treatment response - we set out to build methods that can analyse 3D data effectively. We developed an approach that allows whole-tissue imaging using widely available modalities such as micro-CT scanners or open-top light-sheet microscopy, which is becoming the de facto standard for 3D tissue imaging. We then apply multi-instance learning–based classification in a fully 3D framework. While this poses substantial computational challenges, it enables us to capture morphological representations across the full 3D tissue volume.
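To make the multi-instance learning step concrete, here is a minimal sketch of attention-based MIL pooling: a 3D tissue volume is treated as a "bag" of patch embeddings, each instance receives a learned attention weight, and the weighted sum forms a single volume-level representation for classification. The dimensions, random weights, and function names below are illustrative assumptions, not the lab's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(features, W_att, w_score):
    """Attention-based MIL pooling (illustrative sketch).

    features: (n_instances, d) embeddings of 3D tissue patches.
    Returns the attention-weighted bag embedding and the weights.
    """
    h = np.tanh(features @ W_att)        # (n, k) hidden projection
    logits = h @ w_score                 # (n,) unnormalised attention scores
    att = np.exp(logits - logits.max())
    att = att / att.sum()                # softmax over instances
    bag = att @ features                 # (d,) volume-level representation
    return bag, att

d, k, n = 8, 4, 16                       # assumed toy dimensions
feats = rng.normal(size=(n, d))          # stand-in for a 3D patch encoder's output
W_att = rng.normal(size=(d, k))          # hypothetical learned parameters
w_score = rng.normal(size=k)

bag, att = attention_mil_pool(feats, W_att, w_score)
print(bag.shape, round(float(att.sum()), 6))
```

The attention weights also offer a degree of interpretability: regions of the volume that drive the classification receive higher weight.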
In a study published last year (Cell. 2024 May 9;187(10):2502–2520.e17), we demonstrated that analysing larger tissue volumes improves patient stratification, separating patients more clearly into distinct risk groups.
This raises the question: can we perform AI-driven 3D spatial transcriptomics? Spatial transcriptomics today is extremely expensive - it may cost over $100,000 per sample. But because tissue morphology and spatial transcriptomic patterns are correlated, one could imagine imaging tissue in 3D and performing only minimal spatial transcriptomic profiling. By fine-tuning a pre-trained model on large numbers of paired spatial transcriptomic and histology datasets - what we refer to as in-patient fine-tuning - we can generate highly resolved, cost-effective 3D spatial transcriptomic predictions. Importantly, these predictions do not need to be perfectly accurate to reveal meaningful biological trends.
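The underlying idea can be sketched as a regression from morphology to expression: fit a lightweight prediction head on the few spots where paired histology embeddings and measured expression exist, then infer expression everywhere else in the volume. The ridge-regression head, simulated data, and all names below are assumptions for illustration; the actual approach fine-tunes a pre-trained deep model rather than a linear map.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: each spot has a frozen-encoder morphology embedding
# (d-dim) and, for a small profiled subset, a measured expression vector
# over g genes.
d, g = 16, 5
n_profiled, n_volume = 40, 500

W_true = rng.normal(size=(d, g))              # hidden morphology-to-expression map
X_prof = rng.normal(size=(n_profiled, d))     # embeddings of the profiled spots
Y_prof = X_prof @ W_true + 0.1 * rng.normal(size=(n_profiled, g))

def fit_expression_head(X, Y, lam=1.0):
    """Closed-form ridge regression: a stand-in for fine-tuning a
    prediction head on paired histology/transcriptomics data."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

W_hat = fit_expression_head(X_prof, Y_prof)

# Inference: predict expression for every spot across the full 3D volume,
# even though only a small subset was actually profiled.
X_vol = rng.normal(size=(n_volume, d))
Y_pred = X_vol @ W_hat
print(Y_pred.shape)
```

Even with noisy supervision, the recovered map tracks the true morphology-expression relationship closely enough to expose spatial trends, which mirrors the point that predictions need not be perfect to be biologically informative.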
Fine-tuning is key to this approach: it improves inference performance and enables robust analysis of both inter- and intra-tumoural heterogeneity in 3D.
More recently, we have scaled this up to higher spatial resolutions. Using open-top light-sheet microscopy alongside high-resolution spatial transcriptomic data, we are exploring whether we can infer expression patterns at even finer detail, which could further enhance performance and accelerate discovery. We are currently applying this to prostate cancer, and we expect that the true impact will become clearer as the broader community begins to adopt and test these methods.
[Extract from Keynote lecture 2 at ESMO AI & Digital Oncology Congress 2025, Berlin, Germany]