Artificial intelligence and machine learning in the brain - ESR Connect

Research Presentation Session

RPS 1405a - Artificial intelligence and machine learning in the brain

  • 6 Lectures
  • 34 Minutes
  • 6 Speakers

Lectures

1
RPS 1405a - Machine learning-based MRI texture analysis for predicting 1p/19q codeletion status of lower-grade gliomas

06:11 B. Kocak, Istanbul / TR

Purpose:

To evaluate machine learning (ML)-based MRI texture analysis for predicting 1p/19q codeletion status in lower-grade gliomas (LGG), using various ML algorithms.

Methods and materials:

For this retrospective study, 107 patients with LGG were included from a public database. Ten different training and unseen-test data splits were created using stratified random sampling. Radiomic features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images. Dimensionality reduction was performed using collinearity analysis and feature selection. Classification was performed using adaptive boosting, k-nearest neighbours, naive Bayes, a neural network, random forest, stochastic gradient descent, and a support vector machine. The Friedman test and pairwise post-hoc analyses were used to compare classification performance based on the area under the curve (AUC) metric.
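The comparison workflow described above, repeated stratified splits, several classifiers scored by AUC, and a Friedman test across the splits, can be sketched as follows. The data are synthetic placeholders and only three of the seven algorithms are shown; this is not the study's code.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the radiomic feature matrix (107 patients, as in the abstract)
X, y = make_classification(n_samples=107, n_features=20, random_state=0)

models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=0),
}
aucs = {name: [] for name in models}

# 10 stratified train/unseen-test splits, as described in the methods
for split in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=split)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        aucs[name].append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Friedman test: do the algorithms' AUCs differ across the 10 splits?
stat, p = friedmanchisquare(*aucs.values())
print({n: round(float(np.mean(a)), 3) for n, a in aucs.items()}, round(float(p), 4))
```

The per-split pairing (every algorithm evaluated on the same 10 test sets) is what makes the Friedman test, a repeated-measures rank test, appropriate here.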

Results:

Overall, the performance of the ML algorithms differed statistically significantly, χ²(6) = 40.46, p<0.001. In the pairwise analysis, five algorithms outperformed the others (adjusted p<0.05). Mean AUC and accuracy values for the top five algorithms ranged from 0.813 to 0.871 and from 79.4% to 81.9%, respectively, with no statistically significant difference between them (adjusted p>0.05). The algorithm with the highest mean rank and stability was naive Bayes, with a mean AUC of 0.869 and accuracy of 80.6%.

Conclusion:

ML-based MRI texture analysis may be a promising non-invasive technique for predicting 1p/19q codeletion status in LGGs. Using this technique with various ML algorithms, more than four-fifths of LGGs can be correctly classified.

Limitations:

The most important limitations of the study are its retrospective design, its dependence on the limited data available in the public database, and the lack of other MRI sequences.

Ethics committee approval:

No ethics committee approval was obtained because this work is based on a publicly available database.

Funding:

No funding was received for this work.

2
RPS 1405a - Prediction for the grading of stereotactic biopsy glioma targets based on preoperative MRI textural analysis

05:55 W. Rui, Shanghai / CN

Purpose:

To explore the value of textural analysis based on T1-weighted brain volume imaging with gadolinium contrast enhancement (T1 BRAVO+C) for grading glioma targets sampled by stereotactic biopsy.

Methods and materials:

A total of 36 diffuse glioma cases with 64 puncture targets were included in the study. All patients underwent a preoperative MR scan and intraoperative MR-guided stereotactic puncture biopsy, and all cases had a histopathological diagnosis of WHO grade II or III diffuse glioma. ROIs corresponding to the puncture targets were delineated on T1 BRAVO+C images, and texture features were calculated automatically using Omni Kinetics software. The Mann-Whitney rank-sum test was used to analyse texture differences between grade II and grade III ROIs, and ROC curves were used to evaluate the diagnostic value of textural analysis for grading the glioma targets. The cutoff value was set according to the Youden index.
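The Youden-index cutoff selection described above picks the ROC operating point that maximises J = sensitivity + specificity - 1 (equivalently, TPR - FPR). A minimal sketch, with hypothetical texture values and grade labels rather than the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical texture values for grade II (label 0) and grade III (label 1) ROIs
scores = np.array([0.20, 0.30, 0.35, 0.50, 0.55, 0.60, 0.80, 0.90])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(labels, scores)
youden = tpr - fpr                       # Youden index J at each threshold
cutoff = thresholds[np.argmax(youden)]   # threshold with the largest J
auc = roc_auc_score(labels, scores)
print(cutoff, auc)                       # 0.55 1.0 (values perfectly separated here)
```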

Results:

Texture features including max intensity (P=0.001), 95th quantile (P=0.002), range (P<0.001), variance (P<0.001), standard deviation (P<0.001), sum variance (P=0.022), and cluster prominence (P<0.001) were higher in grade III than in grade II gliomas. In contrast, grade II gliomas showed higher uniformity (P=0.001) and short-run low grey-level emphasis (P=0.018) values. The diagnostic efficiency of higher-order grey-level run-length matrix features was slightly lower than that of first- and second-order features. The AUC for the combined texture features was 0.887 (95% confidence interval: 0.805-0.969, P<0.001).

Conclusion:

Textural analysis of T1 BRAVO+C images is valuable for grading gliomas (WHO grades II and III) and may help guide artificial-intelligence-based selection of preoperative puncture targets.

Limitations:

Biopsy samples could not achieve 100% “point-to-point” correspondence with the delineated ROIs.

Ethics committee approval:

Institutional Review Board approval was obtained. All included patients signed written informed consent.

Funding:

No funding was received for this work.

3
RPS 1405a - Smart protocol: real-time brain MRI pathology detection by deep learning for online protocol control

06:16 R. Kashyape, Nashik / IN

Purpose:

Brain MRI protocols are determined before the patient enters the scanner, and information collected during scanning rarely influences the protocol. Real-time detection of pathologies may guide the choice of the MRI sequences most informative for diagnosis while the patient is still in the scanner. In this way, the scanner can be used optimally, patient discomfort minimised, and downstream reading and reporting prioritised.

Methods and materials:

Two million radiology reports were automatically screened for pathologies using natural language processing, and 5,000 brain MRI studies (obtained in collaboration with Medall Diagnostics, India) reflecting the most predominant pathologies, infarcts (hyperacute and acute) and tumours, were selected. Infarct and tumour pathologies were annotated pixel-wise by trained annotators under a radiologist’s supervision and quality control. Two sets of brain MRI protocols were established for clinically normal patients, patients with tumours, and patients with infarcts (or both): A) a standard clinical protocol and B) a smart protocol (consisting of 4 base sequences and up to 2 additional specialised, pathology-specific sequences).

Results:

On an independent dataset of 88 scans, the turnaround time from scanning a sequence to reporting results back to the hospital system was less than 60 seconds. Specificity and sensitivity were 95% (88-99%) and 78% (52-94%) for tumour detection, and 75% (63-85%) and 100% (83-100%) for infarct detection. In a study with simulated protocols, on average 1.25 fewer sequences were acquired per patient, and an overall average of 0.23 specialised sequences was missed for patients with pathology.

Conclusion:

The turnaround time is sufficiently low to influence protocol selection in clinical practice. The accuracy and the reduced number of acquired sequences make it possible to inform the MRI operator in real time, potentially saving scanner time, contrast administrations, and patient recalls. This has led to a trial installation in several hospitals.

Limitations:

n/a

Ethics committee approval:

n/a

Funding:

No funding was received for this work.

4
RPS 1405a - Can we predict a brain metastases primary site by using deep learning algorithms, even in small datasets?

04:24 B. Alparslan, Kocaeli / TR

Purpose:

To investigate the feasibility of deep learning algorithms in the classification of brain metastases according to their origin.

Methods and materials:

177 patients with brain metastases from lung cancer (n=99), breast cancer (n=44), and other cancer types (n=37) were evaluated in our single-centre retrospective study. The dataset was derived by radiologists from pretreatment brain MR images of the patients and included 4 sequences: pre- and post-contrast T1-weighted spin-echo (T1W SE), fluid-attenuated inversion recovery (FLAIR), and apparent diffusion coefficient (ADC) maps. Since the sequence plans differed from one another, a 4-path convolutional neural network (CNN) was developed in which the images of each sequence were fed to the algorithm separately. We used 124 patients’ data for training and 53 patients’ data for testing in our 3-class (lung, breast, and others) and 2-class (lung and breast) CNN models.
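The 4-path idea described above, one convolutional branch per MRI sequence with the branch features concatenated before classification, can be sketched as below. The layer sizes, branch structure, and input resolution are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FourPathCNN(nn.Module):
    """One convolutional branch per MRI sequence (e.g. pre-/post-contrast
    T1W SE, FLAIR, ADC); branch features are concatenated for classification."""

    def __init__(self, n_classes=3):
        super().__init__()

        def branch():
            # Tiny illustrative branch: conv -> pool -> fixed-size feature vector
            return nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())

        self.branches = nn.ModuleList([branch() for _ in range(4)])
        self.head = nn.Linear(4 * 8 * 4 * 4, n_classes)  # 4 branches x 128 features

    def forward(self, seqs):
        # seqs: list of 4 tensors, one per sequence, each (batch, 1, H, W)
        feats = [b(x) for b, x in zip(self.branches, seqs)]
        return self.head(torch.cat(feats, dim=1))

model = FourPathCNN(n_classes=3)
x = [torch.randn(2, 1, 64, 64) for _ in range(4)]  # batch of 2, 4 sequences
logits = model(x)  # shape (2, 3): one score per class (lung, breast, others)
```

Keeping a separate branch per sequence avoids forcing differently planned acquisitions into a single aligned multi-channel volume.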

Results:

The classification accuracy of the 3-class model was 74.07%; the area under the ROC curve (AUC) was 0.87 for the lung cancer class, 0.89 for breast cancer, and 0.75 for the others group. The 2-class model had a higher accuracy of 81.48% and an AUC of 0.76.

Conclusion:

Deep learning algorithms using multisequence MRI can give satisfactory results in classifying brain metastases, even with small datasets. What about with ‘big data’?

Limitations:

A small dataset.

Ethics committee approval:

The study was approved by Kocaeli University institutional review board.

Funding:

No funding was received for this work.

5
RPS 1405a - CNN based deep learning enhances 3D FLAIR brain perceived quality, SNR, and resolution at ~30% less scan time

06:15 L. Tanenbaum, New York City / US

Purpose:

To evaluate the capability of deep learning (DL)-based image processing of brain MRI to improve quality while reducing acquisition times.

Methods and materials:

With IRB approval and patient consent, 11 patients (age 48±15 years; 7 female) undergoing clinical 1.5T brain MRI exams underwent an accelerated sagittal 3D FLAIR scan (average scan-time reduction 27.1%±3.5%) in addition to the institution’s routine protocol, which included a submillimetre isotropic 3D FLAIR. A third image set was created by processing the faster series with an FDA-cleared CNN-based DL algorithm (SubtleMR™). The 3 sets (standard series (SS), accelerated series (AS), and DL-processed accelerated series (DL)) were randomised and presented side-by-side for pairwise comparisons (33) and evaluated for relative (1) image sharpness, (2) perceived SNR, and (3) lesion/anatomy conspicuity. Each series was also scored independently on overall quality. A two-sided paired t-test was performed for overall image quality, with P<0.05 considered statistically significant. The average image preference and 95% confidence interval were calculated for each paired series and reader.
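The per-reader comparison of overall quality uses a paired test because each case is scored under both conditions by the same reader. A minimal sketch with hypothetical, illustrative scores (not the study's ratings):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical overall-quality scores for 11 cases rated by one reader:
# standard series (SS) vs DL-processed accelerated series (DL)
ss = np.array([4, 4, 3, 4, 5, 4, 4, 3, 4, 4, 4])
dl = np.array([5, 5, 5, 5, 5, 5, 5, 4, 5, 5, 5])

# Two-sided paired t-test on the per-case score differences
stat, p = ttest_rel(dl, ss)
print(round(float(stat), 2), p < 0.05)
```

Pairing removes between-case variability (a hard case is hard in both series), which is why the paired t-test is more sensitive here than an unpaired comparison would be.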

Results:

Overall quality scores (SS/AS/DL) were 4.0/3.1/5.0, 4.0/3.2/5.0, and 4.8/4.5/5.0 for readers 1-3, respectively. Paired t-test results suggested that DL was significantly better than SS (P<0.05) for readers 1 and 2, but not for reader 3 (P=0.10). When presented side-by-side, DL was superior (significantly superior for readers 1 and 2, mildly superior for reader 3) in image sharpness, perceived SNR, and lesion/pathology conspicuity compared with either SS or AS.

Conclusion:

CNN-based DL image processing of 3D FLAIR brain MRI produces a boost in perceived image quality, SNR, and resolution despite a ~30% reduction in scan time.

Limitations:

A limited number of subjects and imaging methods tested.

Ethics committee approval:

Approved by IRB.

Funding:

No funding was received for this work.

6
RPS 1405a - Brain metastases in malignant melanoma: fully automated detection and segmentation on MRI using a deep learning model

05:15 L. Pennig, Cologne / DE

Purpose:

Given the growing demand for magnetic resonance imaging (MRI) of the head in patients with malignant melanoma and the consequently increased workload, physician fatigue, with its inherent risk of missed diagnoses, poses a relevant concern. Automated detection and segmentation of brain metastases could serve as a tool for lesion preselection and for assessing therapeutic success during oncological follow-up. The purpose of this study was to develop and evaluate a deep learning model (DLM) for fully automated detection and segmentation of brain metastases in melanoma patients on multiparametric MRI, including heterogeneous data from different institutions and scanners.

Methods and materials:

In this retrospective study, we included MRI scans (05/2013-10/2018; T1-/T2-weighted, contrast-enhanced T1-weighted (T1CE), and T2-weighted fluid-attenuated inversion recovery) from 54 melanoma patients (mean age 63.54±13.83 years, 24 female) with 102 metastases at initial diagnosis. Independent voxel-wise manual segmentations of the metastases (based on T1CE) by two radiologists provided the ground truth for metastasis count and segmentation. A 3D convolutional neural network (DeepMedic, BioMedIA) initially trained on glioblastomas received additional dedicated training using five-fold cross-validation (5-FCV). Dice coefficients were calculated to compare segmentation accuracies.
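The Dice coefficient used above measures volumetric overlap between an automated and a manual segmentation: twice the intersection divided by the sum of the two mask sizes. A minimal sketch on small illustrative 2D masks (the study works on 3D voxel masks, but the formula is identical):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Illustrative masks: prediction covers 4 pixels, ground truth 6, with 4 overlapping
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
truth = np.zeros((4, 4)); truth[1:3, 1:4] = 1
print(dice(pred, truth))  # 2*4 / (4+6) = 0.8
```

A Dice of 1.0 means perfect overlap and 0.0 means none; the study's median values (0.64 before, 0.74 after dedicated training) sit on this scale.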

Results:

The mean size of the metastases was 2.35±7.81 cm³ [range: 0.003-66.6 cm³]. The glioblastoma-trained DLM achieved a detection rate of 0.47 and reasonable segmentation accuracy (median Dice: 0.64). After 5-FCV training, the detection rate increased to 0.87 (p<0.001) and segmentation accuracy improved (median Dice: 0.74, p<0.05).

Conclusion:

After dedicated training, our DLM detects brain metastases of malignant melanoma on multiparametric MRI with high accuracy. Despite small lesion sizes and heterogeneous scanner data, the automated segmentation achieved good volumetric accuracy compared with the manual segmentations.

Limitations:

A retrospective study.

Ethics committee approval:

Ethics committee approval was obtained and consent was waived.

Funding:

No funding was received for this work.

Speakers

Presenter

Lawrence Neil Tanenbaum

New York City, United States

Presenter

Rohan Kashyape

Nashik, India

Presenter

Lenhard Pennig

Cologne, Germany

Presenter

Burak Kocak

Istanbul, Turkey

Presenter

Wenting Rui

Shanghai, China

Presenter

Burcu Alparslan

Kocaeli, Turkey