RPS 1405a - Artificial intelligence and machine learning in the brain

RPS 1405a-K
11:15
Keynote lecture
RPS 1405a-2
11:25
Machine learning-based MRI texture analysis for predicting 1p/19q codeletion status of lower-grade gliomas
Purpose: To evaluate the value of machine learning (ML)-based MRI texture analysis for predicting 1p/19q codeletion status in lower-grade gliomas (LGG) using various ML algorithms.
Methods: For this retrospective study, 107 patients with LGG were included from a public database. Ten different training and unseen test data splits were created using stratified random sampling. Radiomic features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images. Dimension reduction was performed using collinearity analysis and feature selection. Classification was performed using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine algorithms. The Friedman test and pairwise post-hoc analyses were used to compare classification performance based on the area under the curve (AUC) metric.
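As an illustration of this kind of multi-algorithm comparison, the sketch below trains several scikit-learn classifiers over repeated stratified splits and applies a Friedman test to their AUCs. The data, feature count, and classifier settings are placeholders, not the authors' pipeline:

```python
# Hypothetical sketch: compare classifiers across stratified splits via a
# Friedman test on AUCs (synthetic stand-in data, not the study's features).
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(107, 30))      # placeholder: 107 patients x 30 radiomic features
y = rng.integers(0, 2, size=107)    # placeholder 1p/19q codeletion labels

classifiers = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "adaboost": AdaBoostClassifier(),
    "random_forest": RandomForestClassifier(),
}
aucs = {name: [] for name in classifiers}

# 10 stratified training/unseen-test splits, as described in the abstract
splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
for train_idx, test_idx in splitter.split(X, y):
    for name, clf in classifiers.items():
        clf.fit(X[train_idx], y[train_idx])
        probs = clf.predict_proba(X[test_idx])[:, 1]
        aucs[name].append(roc_auc_score(y[test_idx], probs))

# Friedman test: do the algorithms' AUCs differ systematically across splits?
stat, p = friedmanchisquare(*aucs.values())
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```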
Results: Overall, the performance of the ML algorithms differed significantly, χ²(6) = 40.46, p<0.001. In the pairwise analysis, five algorithms outperformed the others, adjusted p<0.05. Mean AUC and accuracy values for the top five algorithms ranged from 0.813 to 0.871 and from 79.4% to 81.9%, respectively, with no statistically significant difference among them, adjusted p>0.05. The ML algorithm with the highest mean rank and stability was naive Bayes, with a mean AUC of 0.869 and accuracy of 80.6%.
Conclusion: ML-based MRI texture analysis might be a promising non-invasive technique for predicting 1p/19q codeletion status in LGGs. Using this technique with various ML algorithms, more than four-fifths of LGGs can be correctly classified.
Limitations: The most important limitations of the study are its retrospective design, dependence on the limited data available in the public database, and the lack of other MRI sequences.
Ethics: No ethics committee approval was obtained because this work is based on a publicly available database.
Funding: No funding was received for this work.
RPS 1405a-3
11:31
Prediction for the grading of stereotactic biopsy glioma targets based on preoperative MRI textural analysis (recorded)
Purpose: To explore the value of textural analysis based on T1-weighted brain volume with gadolinium contrast enhancement (T1 BRAVO+C) images for the grading of glioma targets by stereotactic biopsy.
Methods: A total of 36 diffuse glioma cases and 64 puncture targets were included in the study. All patients underwent a preoperative MR scan and intraoperative MR-guided stereotactic puncture biopsy. All cases had a histopathological diagnosis of WHO grade II or III diffuse glioma. ROIs corresponding to the puncture targets were delineated on T1 BRAVO+C images, and texture features were automatically calculated using Omni Kinetics software. The Mann-Whitney rank-sum test was used to analyse texture differences between grade II and III ROIs, and ROC curves were used to evaluate the diagnostic value of textural analysis for grading glioma targets. The cutoff value was set according to the Youden index.
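A minimal sketch of this statistical workflow, assuming hypothetical per-target feature values (Mann-Whitney U test, ROC analysis, and a Youden-index cutoff):

```python
# Hypothetical sketch: compare one texture feature between grade II and III
# targets, then derive a Youden-index cutoff from the ROC curve.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
grade2 = rng.normal(100, 15, size=30)   # placeholder feature values, grade II ROIs
grade3 = rng.normal(120, 15, size=34)   # placeholder feature values, grade III ROIs

u_stat, p_value = mannwhitneyu(grade2, grade3, alternative="two-sided")

values = np.concatenate([grade2, grade3])
labels = np.concatenate([np.zeros(len(grade2)), np.ones(len(grade3))])
fpr, tpr, thresholds = roc_curve(labels, values)
cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden index J = sensitivity + specificity - 1
print(f"U={u_stat:.1f}, p={p_value:.4f}, AUC={auc(fpr, tpr):.3f}, cutoff={cutoff:.1f}")
```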
Results: Texture features including max intensity (P=0.001), 95th quantile (P=0.002), range (P<0.001), variance (P<0.001), standard deviation (P<0.001), sum variance (P=0.022), and cluster prominence (P<0.001) were higher in grade III gliomas than in grade II. In contrast, grade II gliomas showed higher uniformity (P=0.001) and short-run low grey-level emphasis values (P=0.018). The diagnostic efficiency of high-order grey-level run-length matrix features was slightly lower than that of first- and second-order features. With combined texture features, the AUC was 0.887 (95% confidence interval: 0.805-0.969, P<0.001).
Conclusion: Textural analysis of T1 BRAVO+C images is valuable for grading gliomas (WHO II and III) and may help guide AI-assisted selection of preoperative puncture targets.
Limitations: Biopsy samples could not achieve 100% "point-to-point" correspondence.
Ethics: Institutional Review Board approval was obtained. All included patients signed written informed consent.
Funding: No funding was received for this work.
RPS 1405a-4
11:37
Glioma segmentation in sparse label applications: a federated learning solution
Purpose: Accurate tissue segmentations are essential for clinical applicability. We propose an improved approach to segmentation using federated learning for the decentralised training of a convolutional neural network (CNN) on heterogeneous MRI datasets of glioma patients.
Methods: We split the BRATS dataset (braintumorsegmentation.org) into three virtual hospitals (VHs) and added a fourth VH with data from a subsample of 121 patients of a publicly available dataset (figshare.com/articles/brain_tumor_dataset/1512427). Two VHs (54 and 69 patients, respectively) have contrast-enhanced T1-weighted MRIs with five classes [necrosis (N), oedema (E), enhancing tumour tissue (ETT), non-enhancing tumour tissue (N-ETT), and background]. One VH has only a binary segmentation map (61 images), and the last VH has only unlabelled data (120 images). We trained the CNN in a federated learning setup without central data aggregation. In each VH where segmentations were available, a segmentation model was trained; in each VH, an autoencoder was trained to learn volume reconstruction. Noise and rotation were used for data augmentation. In the federated merging of the different models, only the decoder path of the CNN was combined into a global model.
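The decoder-only merging step could look roughly like the sketch below (a PyTorch illustration under the assumption that decoder parameters carry a "decoder." name prefix; this is not the authors' implementation):

```python
# Hypothetical sketch: federated averaging restricted to the decoder path.
from typing import Dict, List
import torch

def merge_decoders(local_states: List[Dict[str, torch.Tensor]],
                   global_state: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Average decoder parameters across virtual hospitals; leave the rest unchanged."""
    merged = dict(global_state)
    for name in global_state:
        if name.startswith("decoder."):   # assumed parameter-naming convention
            merged[name] = torch.stack([s[name] for s in local_states]).mean(dim=0)
    return merged

# In each federated round, every VH trains locally, sends its state_dict, and
# receives the merged global model back; raw images never leave the VH.
```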
Results: For the evaluation, we used the Dice similarity score and evaluated the performance classwise. The CNN trained with federated learning [evaluation score: 0.43 (N), 0.56 (E), 0.34 (ETT), and 0.74 (N-ETT)] outperformed results obtained with a baseline model [0.28 (N), 0.38 (E), 0.29 (ETT), and 0.54 (N-ETT)] trained in a single VH.
Conclusion: The proposed federated learning setup is superior for training CNNs on radiological data with sparse labelling: it improves performance and allows the use of larger datasets without compromising patient confidentiality.
Limitations: The experiments are limited to a single modality.
Ethics: n/a
Funding: No funding was received for this work.
RPS 1405a-5
11:43
Deep learning radiomics algorithm for glioma (DRAG) for predicting survival in gliomas
Purpose: Segmentation of brain tumours from multi-modal MR imaging remains a challenge, and deep learning has a potential role in diagnosis, prognosis, and survival prediction. The project aimed at tumour segmentation and at identifying radiomic features for predicting overall survival (OS).
Methods: The proposed method was trained and validated on the BRATS 2018 dataset. We developed a patch-based 3D U-Net model for tumour segmentation and evaluated the efficiency of radiomic features for OS prediction. Radiomic features were extracted from all four MR modalities. The training dataset included 210 high-grade gliomas (HGG) and 75 low-grade gliomas (LGG), while the validation set consisted of 66 cases. The trained model was further validated on 191 sets of patient data from our hospital.
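As a rough illustration of a patch-based 3D U-Net of this kind, the PyTorch sketch below builds a deliberately tiny two-level network with four input channels for the four MR modalities (a simplified stand-in, not the DRAG architecture itself):

```python
# Hypothetical sketch: a minimal patch-based 3D U-Net with 4 modality channels.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=4, n_classes=4):   # 4 modalities in, 4 tissue classes out
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)           # 16 upsampled + 16 skip channels
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, x):                        # x: (batch, 4, D, H, W) patches
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                     # per-voxel class logits

logits = TinyUNet3D()(torch.randn(1, 4, 64, 64, 64))   # one 64-cube patch
```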
Results: All 285 training datasets were used in the model training process, and the results were based on all 66 validation datasets. The final mean Dice indices for the enhancing tumour (ET), whole tumour (WT), and tumour core (TC) were 0.75, 0.89, and 0.81, which shows our approach outperforms other submissions to the BRATS 2018 challenge. The method achieved Dice scores of 0.88, 0.83, and 0.75 for whole tumour, tumour core, and enhancing tumour, respectively. For the prediction of survival categories (<300 vs ≥300 days), the neural network demonstrated an accuracy of 70.2% in the training subset and 62.5% and 63.6% in the validation and testing subsets, respectively. The accuracy was 73% for the entire training dataset, and the AUC was 0.799.
Conclusion: Our study demonstrates that transfer learning-based deep features can generate prognostic imaging signatures for OS prediction and patient stratification in GBM, indicating the potential of deep imaging feature-based biomarkers in the preoperative care of GBM patients.
Limitations: n/a
Ethics: Ethics committee approval obtained.
Funding: No funding was received for this work.
RPS 1405a-6
11:49
Smart protocol: real-time brain MRI pathology detection by deep learning for online protocol control
Purpose: Brain MRI protocols are determined before the patient enters the scanner, and information collected during scanning rarely influences the protocol. Real-time detection of pathologies may guide the choice of the MRI sequences most informative for diagnosis while the patient is still in the scanner. In this way, the scanner may be used optimally, patient discomfort minimised, and downstream reading and reporting prioritised.
Methods: Two million radiology reports were automatically screened for pathologies using natural language processing, selecting 5,000 brain MRI studies obtained in collaboration with Medall Diagnostics, India, reflecting the most predominant pathologies: infarcts (hyperacute and acute) and tumours. Infarct and tumour pathologies were annotated pixel-wise by trained annotators under a radiologist's supervision and quality control. Two sets of brain MRI protocols for clinically normal patients, patients with tumours, and patients with infarcts (or both) were established: A) a standard clinical protocol and B) a smart protocol (consisting of 4 base sequences and up to 2 additional specialised, pathology-specific sequences).
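Conceptually, the online protocol control amounts to appending specialised sequences when the detector flags a pathology, roughly as in this sketch (sequence names, thresholds, and the findings format are illustrative assumptions):

```python
# Hypothetical sketch: choose additional sequences from real-time detections.
BASE_SEQUENCES = ["T1", "T2", "FLAIR", "DWI"]          # assumed 4 base sequences
EXTRA = {
    "tumour": ["T1_contrast", "perfusion"],            # assumed specialised sequences
    "infarct": ["SWI", "MR_angiography"],
}

def plan_protocol(findings: dict, threshold: float = 0.5) -> list:
    """Return the sequence list given per-pathology detection probabilities."""
    protocol = list(BASE_SEQUENCES)
    for pathology, probability in findings.items():
        if probability >= threshold:
            protocol += EXTRA.get(pathology, [])[:2]   # up to 2 extra sequences each
    return protocol

print(plan_protocol({"tumour": 0.91, "infarct": 0.12}))
# ['T1', 'T2', 'FLAIR', 'DWI', 'T1_contrast', 'perfusion']
```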
Results: On an independent dataset of 88 scans, the turnaround time from scanning a sequence to reporting results back to the hospital system was less than 60 seconds. Specificity and sensitivity were 95% (88-99%) and 78% (52-94%) for tumours, and 75% (63-85%) and 100% (83-100%) for infarcts. In a study with simulated protocols, on average 1.25 fewer sequences were acquired per patient, and overall only 0.23 specialised sequences were missed per patient with pathology.
Conclusion: The turnaround time is sufficiently low to influence protocol selection in clinical practice. The achieved accuracy and the reduced number of acquired sequences make it possible to inform the MRI operator in time, potentially saving scanner time, contrast administration, and patient recalls. This has led to a trial installation in several hospitals.
Limitations: n/a
Ethics: n/a
Funding: No funding was received for this work.
RPS 1405a-7
11:55
Classifying brain metastatic disease by an unknown cancer primary organ site using whole-brain clinical MRI data: a 3D convolutional neural network approach
Purpose: Treatment decisions for brain metastatic disease are driven by knowledge of the primary organ site's cancer histology, which can require invasive biopsy. We propose an automated deep learning algorithm and image-preprocessing pipeline for rapid, non-invasive, imaging-based identification of brain metastatic tumour histology from conventional whole-brain T1-weighted MRI data. Using whole-brain data obviates the need for brain tumour segmentation, which can be time-intensive. We hypothesise that whole-brain imaging features will be sufficiently discriminative to allow accurate diagnosis of the primary organ site of malignancy.
Methods: This single-site retrospective diagnostic study comprised patients (n=1,302) referred for gamma knife radiosurgery from July 2000 to May 2019. Contrast-enhanced T1-weighted brain MRI exams (n=2,104 MRIs) acquired from these patients were minimally preprocessed (voxel resampling and signal intensity rescaling/normalisation), requiring only seconds per MRI dataset, and used to train a 3D convolutional neural network (CNN) to determine the primary organ site associated with brain metastatic disease among three classes (breast, lung, and melanoma).
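The minimal preprocessing described (voxel resampling plus intensity rescaling) might look like the following SimpleITK sketch; the target spacing and intensity range are assumptions:

```python
# Hypothetical sketch: resample to isotropic voxels and rescale intensities.
import SimpleITK as sitk

def preprocess(path: str, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
    img = sitk.ReadImage(path)
    new_size = [int(round(sz * sp / nsp)) for sz, sp, nsp
                in zip(img.GetSize(), img.GetSpacing(), spacing)]
    resampled = sitk.Resample(img, new_size, sitk.Transform(),   # identity transform
                              sitk.sitkLinear, img.GetOrigin(), spacing,
                              img.GetDirection(), 0.0, img.GetPixelID())
    # cast to float and rescale signal intensities to [0, 1]
    return sitk.RescaleIntensity(sitk.Cast(resampled, sitk.sitkFloat32), 0.0, 1.0)
```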
Results: After nested 10-fold cross-validation, our algorithm achieved best AUCs of 0.987 (95% CI: 0.983-0.991) for breast vs lung and 0.988 (95% CI: 0.984-0.991) for lung vs melanoma. Although breast versus melanoma demonstrated a low AUC with images alone (0.550), the algorithm performed better after the incorporation of demographic data (AUC = 0.704).
Conclusion: Our results demonstrate a robust CNN algorithm for effectively classifying metastatic tumour histology for breast and lung based on conventional whole-brain MRI, without the need for tumour segmentation. Further refinement may offer an invaluable tool to expedite primary organ site cancer identification for brain metastatic disease and perhaps improve patient outcomes and survival.
Limitations: Limitations include relatively small sample size and our retrospective approach from a single institution.
Ethics: IRB-approved.
Funding: NIH: P30CA01219, P01CA207206, R01CA074145.
RPS 1405a-8
12:01
Can we predict the primary site of brain metastases using deep learning algorithms, even in small datasets?
Purpose: To investigate the feasibility of deep learning algorithms in the classification of brain metastases according to their origin.
Methods: 177 patients with brain metastases from lung cancer (n=99), breast cancer (n=44), and other cancer types (n=37) were evaluated in our single-centre retrospective study. The dataset was derived by radiologists from the patients' pretreatment brain MR images, including 4 sequences: pre- and postcontrast T1-weighted spin-echo (T1W SE), fluid-attenuated inversion recovery (FLAIR), and apparent diffusion coefficient (ADC) maps. Since the sequence plans differed from each other, a 4-path convolutional neural network (CNN) was developed in which the sequence images were fed separately to the algorithm. We used 124 patients' data for training and 53 patients' data for testing in our 3-class (lung, breast, and others) and 2-class (lung and breast) CNN models.
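A simplified Keras sketch of such a multi-path design, with one convolutional branch per sequence fused before classification (layer sizes and input shape are assumptions, not the authors' network):

```python
# Hypothetical sketch: a 4-path CNN, one branch per MR sequence.
from tensorflow import keras
from tensorflow.keras import layers

def branch(name):
    inp = keras.Input(shape=(128, 128, 1), name=name)   # assumed slice size
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    return inp, layers.GlobalAveragePooling2D()(x)

inputs, features = zip(*[branch(n) for n in ("t1_pre", "t1_post", "flair", "adc")])
merged = layers.concatenate(list(features))              # fuse the 4 branches
hidden = layers.Dense(64, activation="relu")(merged)
output = layers.Dense(3, activation="softmax")(hidden)   # lung / breast / others
model = keras.Model(inputs=list(inputs), outputs=output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```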
Results: The classification accuracy of the 3-class model was 74.07%; the area under the ROC curve (AUC) was 0.87 for the lung cancer class, 0.89 for breast cancer, and 0.75 for the others group. The two-class model had a higher accuracy of 81.48% and an AUC of 0.76.
Conclusion: Deep learning algorithms using multisequence MRI can give satisfactory results in classifying brain metastases, even in small datasets. What might be achieved with "big data"?
Limitations: A small dataset.
Ethics: The study was approved by Kocaeli University institutional review board.
Funding: No funding was received for this work.
RPS 1405a-9
12:07
Deep convolutional neural network for automated segmentation of brain metastases, trained on clinical data acquired during six years of stereotactic radiosurgery
Purpose: Deep convolutional neural networks (DCNN) have demonstrated impressive performance in many segmentation tasks in medical imaging. Brain metastases, with their large variability in imaging appearance, remain a challenge. To properly evaluate the clinical performance of a state-of-the-art algorithm on this task, we collected a clinically representative set of imaging data acquired for stereotactic radiosurgeries between 2013 and 2019.
Methods: Registered MR images (contrast-enhanced T1, T2, and FLAIR) and the contour data containing the delineated lesions were retrieved from our treatment planning system. The data (509 patients with 1,223 metastases) were split into a training (469 patients) and a test (40 patients) set. Ground truth segmentations on the test data were individually checked by a senior oncologist with 25 years of experience (MK). In addition to a conventional U-Net, a U-Net with multiple outputs (moU-Net) and a U-Net trained only on small lesions (<0.4 ml) (sU-Net) were employed.
Results: The U-Net, moU-Net, and sU-Net detected brain metastases with sensitivities of 69%, 69%, and 51%, respectively. The sU-Net was better at detecting small lesions (64% sensitivity) than the conventional U-Net (48%) and the moU-Net (48%). An ensemble of these networks had a sensitivity of 79%/74%, with a mean false-positive rate of 0.8/0.175 and a mean Dice score of 0.7/0.71, depending on whether the segmentations were merged through summation or averaging.
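The two merging strategies mentioned (summation vs averaging of the networks' outputs) can be sketched as follows, assuming per-network probability maps and an illustrative threshold:

```python
# Hypothetical sketch: fuse ensemble probability maps into one binary mask.
from typing import List
import numpy as np

def ensemble_mask(prob_maps: List[np.ndarray], mode: str = "mean",
                  threshold: float = 0.5) -> np.ndarray:
    stacked = np.stack(prob_maps)        # shape: (n_networks, D, H, W)
    if mode == "mean":
        fused = stacked.mean(axis=0)     # averaging: more conservative
    elif mode == "sum":
        fused = stacked.sum(axis=0)      # summation: more sensitive, more false positives
    else:
        raise ValueError(f"unknown mode: {mode}")
    return fused >= threshold
```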
Conclusion: We demonstrated that DCNNs trained on data collected during clinical practice produce state-of-the-art results and that in-house development of such algorithms is a feasible option. It was furthermore shown that a single network fails to generalise and that an ensemble of differently trained networks is superior.
Limitations: A single-centre study.
Ethics: n/a
Funding: No funding was received for this work.
RPS 1405a-10
12:13
CNN-based deep learning enhances perceived 3D FLAIR brain image quality, SNR, and resolution at ~30% less scan time
Purpose: To evaluate the capability of deep learning (DL)-based image processing of brain MRI to improve quality while reducing acquisition times.
Methods: With IRB approval and patient consent, 11 patients (age: 48±15 years; 7 female) undergoing clinical 1.5T brain MRI exams received an accelerated sagittal 3D FLAIR scan (average scan-time reduction 27.1%±3.5%) in addition to the institution's routine protocol, which included a submillimetre isotropic 3D FLAIR. A third image set was created by processing the faster series with an FDA-cleared CNN-based DL algorithm (SubtleMR). The 3 sets (standard series (SS), accelerated series (AS), and DL-processed accelerated series (DL)) were randomised and presented side-by-side for pairwise comparisons (33 in total) and evaluated for relative (1) image sharpness, (2) perceived SNR, and (3) lesion/anatomy conspicuity. Each series was also independently scored for overall quality. A two-sided paired t-test was performed for overall image quality, with P<0.05 considered statistically significant. The average image preference and 95% confidence interval were calculated for each paired series and reader.
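The reader-score analysis amounts to a two-sided paired t-test per reader, as in this sketch with hypothetical per-patient quality scores:

```python
# Hypothetical sketch: paired comparison of overall quality scores (one reader).
from scipy.stats import ttest_rel

standard_series = [4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 4]        # placeholder SS scores
dl_series       = [5, 5, 5, 5, 4, 5, 5, 5, 5, 5, 5]        # placeholder DL scores
t_stat, p_value = ttest_rel(dl_series, standard_series)    # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")              # p < 0.05 -> significant
```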
Results: Overall quality scores (SS/AS/DL) were 4.0/3.1/5.0, 4.0/3.2/5.0, and 4.8/4.5/5.0 for readers 1-3, respectively. Paired t-test results suggested that DL is significantly better than SS (P<0.05) for readers 1 and 2, but not for reader 3 (P=0.10). When presented side-by-side, DL was superior (significantly for readers 1 and 2, mildly for reader 3) in image sharpness, perceived SNR, and lesion/pathology conspicuity compared with SS or AS.
Conclusion: CNN-based DL image processing of 3D FLAIR brain MRI boosts perceived image quality, SNR, and resolution despite a ~30% reduction in scan time.
Limitations: A limited number of subjects and imaging methods tested.
Ethics: Approved by IRB.
Funding: No funding was received for this work.
RPS 1405a-11
12:19
Brain metastases in malignant melanoma: fully automated detection and segmentation on MRI using a deep learning model
Purpose: Given the growing demand for magnetic resonance imaging (MRI) of the head in patients with malignant melanoma, and the consequently increased workload, physician fatigue with its inherent risk of missed diagnoses poses a relevant concern. Automated detection and segmentation of brain metastases could serve as a tool for lesion preselection and for assessing therapeutic success at oncological follow-up. The purpose of this study was the development and evaluation of a deep learning model (DLM) for fully automated detection and segmentation of brain metastases in melanoma patients on multiparametric MRI, including heterogeneous data from different institutions and scanners.
Methods: In this retrospective study, we included MRI scans (05/2013-10/2018; T1-/T2-weighted, T1-weighted contrast-enhanced (T1CE), and T2-weighted fluid-attenuated inversion recovery) from 54 melanoma patients (mean age 63.54±13.83 years, 24 females) with 102 metastases at initial diagnosis. Voxel-wise manual segmentations of the metastases (based on T1CE), performed independently by two radiologists, provided the ground truth for metastasis count and segmentation. A 3D convolutional neural network (DeepMedic, BioMedIA), initially trained on glioblastomas, was used and received additional dedicated training with five-fold cross-validation (5-FCV). Dice coefficients were calculated to compare segmentation accuracies.
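The Dice coefficient used here follows its standard definition, 2|A∩B|/(|A|+|B|); a minimal sketch for binary masks:

```python
# Standard Dice coefficient for binary segmentation masks (illustrative helper).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```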
Results: The mean size of the metastases was 2.35±7.81 cm³ (range: 0.003-66.6 cm³). The glioblastoma-trained DLM achieved a detection rate of 0.47 with reasonable segmentation accuracy (median Dice: 0.64). After 5-FCV training, the detection rate increased to 0.87 (p<0.001) and the segmentation accuracy to a median Dice of 0.74 (p<0.05).
Conclusion: After dedicated training, our DLM detects brain metastases of malignant melanoma on multiparametric MRI with high accuracy. Despite small lesion sizes and heterogeneous scanner data, automated segmentation achieved good volumetric accuracy compared with manual segmentations.
Limitations: A retrospective study.
Ethics: Ethics committee approval obtained and consent waived.
Funding: No funding was received for this work.

Moderators

Marleen De Bruijne (Netherlands)

Asif Mazumder (London/GB)
