Abstract
Deep learning has revolutionized medical image analysis in cancer pathology, where it has had a substantial clinical impact by supporting the diagnosis and prognostic rating of cancer. Glioblastoma, the most common and fatal brain cancer, is among the first entities in the field for which large digital resources are available. At the histologic level, glioblastoma is characterized by abundant phenotypic variability that is poorly linked with patient prognosis. At the transcriptional level, three molecular subtypes are distinguished, with mesenchymal-subtype tumors being associated with increased immune cell infiltration and worse outcome. We address genotype-phenotype correlations by applying an Xception convolutional neural network to a discovery set of 276 digital hematoxylin and eosin (H&E) slides with molecular subtype annotation and an independent The Cancer Genome Atlas-based validation cohort of 178 cases. Using this approach, we achieve high accuracy in H&E-based mapping of molecular subtypes (area under the curve for classical, mesenchymal, and proneural = 0.84, 0.81, and 0.71, respectively; P < 0.001) and of regions associated with worse outcome (univariable survival model P < 0.001, multivariable P = 0.01). The latter regions were characterized by higher tumor cell density (P < 0.001), greater phenotypic variability of tumor cells (P < 0.001), and decreased T-cell infiltration (P = 0.017). By adapting a well-known convolutional neural network architecture to glioblastoma digital slides, we accurately map the spatial distribution of transcriptional subtypes and of regions predictive of worse outcome, showcasing the relevance of artificial intelligence-enabled image mining in brain cancer.
Overview
- The study investigates genotype-phenotype correlations in glioblastoma by applying an Xception convolutional neural network to a discovery set of 276 digital H&E slides with molecular subtype annotation and an independent validation cohort of 178 cases. The primary objective is accurate H&E-based mapping of molecular subtypes and of regions associated with worse outcome, thereby testing whether artificial intelligence-enabled image mining can map the spatial distribution of transcriptional subtypes and prognostically unfavorable regions in glioblastoma.
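The tile-level classification step implied above (an Xception network assigning each H&E tile to one of the three transcriptional subtypes) can be sketched as follows. This is an illustrative reconstruction in `tf.keras`, not the authors' published code; the tile size, the single dense softmax head, and the random stand-in tiles are all assumptions.

```python
# Illustrative three-class (classical/mesenchymal/proneural) tile classifier
# built on an Xception backbone. Assumption: 96x96 RGB tiles and a single
# dense softmax head; the paper's exact preprocessing is not reproduced.
import numpy as np
import tensorflow as tf

def build_subtype_classifier(tile_size=96, n_classes=3):
    # Xception backbone without the ImageNet classification head;
    # weights=None keeps this sketch self-contained (no weight download).
    backbone = tf.keras.applications.Xception(
        weights=None, include_top=False,
        input_shape=(tile_size, tile_size, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

model = build_subtype_classifier()
# Random stand-in for two H&E tiles, scaled to [0, 1].
tiles = np.random.rand(2, 96, 96, 3).astype("float32")
probs = model.predict(tiles, verbose=0)  # shape (2, 3), one softmax row per tile
```

A slide-level subtype call would then aggregate these tile-level probabilities, for example by averaging them across all tiles of a slide.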
Comparative Analysis & Findings
- The study compares model performance between the discovery set and the independent The Cancer Genome Atlas-based validation cohort of 178 cases. The network achieves high accuracy in H&E-based mapping of molecular subtypes (area under the curve for classical, mesenchymal, and proneural = 0.84, 0.81, and 0.71, respectively; P < 0.001) and of regions associated with worse outcome (univariable survival model P < 0.001, multivariable P = 0.01). These regions were characterized by higher tumor cell density (P < 0.001), greater phenotypic variability of tumor cells (P < 0.001), and decreased T-cell infiltration (P = 0.017).
Implications and Future Directions
- The findings highlight the potential of artificial intelligence-enabled image mining to support the diagnosis and prognostic rating of glioblastoma. H&E-based identification of regions associated with worse outcome could sharpen diagnosis and prognosis, while mapping of molecular subtypes could inform personalized treatment. Suggested future directions include integrating other imaging modalities, exploring additional molecular subtypes, and developing predictive models for clinical decision-making.