Research Presentation Session

RPS 1805 - Deep learning based scanning, image reconstruction, and quality assurance

Lectures

1
RPS 1805 - Elevating clinical brain and spine MR image quality with deep learning reconstruction

06:34 L. Tanenbaum, New York City / US

Purpose:

In the quest for ever-higher image quality, MR throughput can suffer, and manoeuvres that create faster scans trade off quality. There is a need to enhance images without prolonging scan time. Recently, deep learning-based reconstruction methods have shown promise for enhancing image quality. We evaluated the impact of a new deep learning image reconstruction (DLR) method for both noise reduction and improved image sharpness in clinical MR exams of the brain and spine.

Methods and materials:

The investigational DLR leverages a deep convolutional residual encoder network trained on a database of more than 10,000 images to create images with enhanced SNR and spatial resolution. 28 patients were scanned using clinical 2D brain (3T: 7; 1.5T: 4) or spine (3T: 12; 1.5T: 5) protocols. K-space data were reconstructed with both conventional reconstruction and DLR (tuned to 75% noise reduction). Two neuroradiologists independently rated 93 pairs of conventional and DLR images side by side. Ratings covered overall IQ, lesion conspicuity, perceived SNR and resolution, CNR, image texture, and artefact, using a 5-point Likert scale (5=excellent, 1=non-diagnostic). A Wilcoxon signed-rank test was used to compare the ratings, and inter-rater reliability between readers was assessed using the Bennett S score.
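The Bennett S score used for inter-rater reliability has a simple closed form. A minimal sketch with hypothetical reader scores, assuming the usual definition with k equiprobable categories (this is not the authors' analysis code):

```python
def bennett_s(ratings_a, ratings_b, k=5):
    """Bennett's S, a chance-corrected agreement coefficient assuming k
    equiprobable categories: S = (k*Po - 1) / (k - 1), where Po is the
    observed proportion of exact agreement."""
    assert len(ratings_a) == len(ratings_b)
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    return (k * po - 1) / (k - 1)

# hypothetical 5-point Likert scores from two readers for 10 image pairs
reader1 = [5, 4, 5, 5, 4, 5, 3, 4, 5, 5]
reader2 = [5, 4, 4, 5, 4, 5, 4, 4, 5, 5]
print(round(bennett_s(reader1, reader2), 2))  # 0.75
```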

Results:

DLR showed statistically significant improvement over conventional images in overall image quality (4.74±0.49 vs 3.27±0.70, p<0.05), lesion conspicuity (4.65±0.49 vs 3.24±0.52, p<0.05), contrast (4.59±0.61 vs 3.50±0.59, p<0.05), perceived resolution (4.66±0.61 vs 3.36±0.59, p<0.05), perceived SNR (4.72±0.60 vs 3.33±0.53, p<0.05), and image texture (4.66±0.60 vs 3.13±0.38, p<0.05). There was substantial inter-rater agreement, with an average S score of 0.66.

Conclusion:

Overall IQ improved with DLR with higher perceived SNR, CNR, and spatial resolution compared to the conventional method. Future work will assess whether this technique can accelerate acquisitions while preserving quality.

Limitations:

A limited series.

Ethics committee approval

Approved with subject consent.

Funding:

No funding was received for this work.

2
RPS 1805 - The effect of deep learning reconstruction on image quality in chest CT

08:02 J. Schuzer, Bethesda / US

Purpose:

To investigate the effect of a deep learning image reconstruction algorithm on image quality in chest CT scans.

Methods and materials:

With institutional ethics approval, 100 consecutive patients underwent chest CT at standard radiation doses on a 320-detector row CT scanner with the following scan parameters: helical scan, 0.5 mm x 80 detector rows, 120 or 100 kV with automatic exposure control, 0.275 s rotation speed, and standard pitch. Each scan was reconstructed as a 0.5 mm volume and 3 mm axial, coronal, and sagittal slices with both lung and soft tissue kernels using the clinical-standard hybrid IR (AIDR3D) and deep learning reconstruction (AiCE) techniques. Images were evaluated for overall image quality, noise, presence of artefacts, contrast, visibility of small structures, and diagnostic confidence using a 4-point Likert scale. SNR and CNR were calculated for each reconstruction. Data were analysed with a paired t-test.
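The SNR and CNR figures are presumably computed from ROI statistics in the usual way. A minimal sketch with hypothetical HU values, since the abstract does not give its exact formulas:

```python
def snr(roi_mean, noise_sd):
    """Signal-to-noise ratio: mean ROI signal over noise SD."""
    return roi_mean / noise_sd

def cnr(roi_mean, ref_mean, noise_sd):
    """Contrast-to-noise ratio: ROI-to-reference contrast over noise SD."""
    return abs(roi_mean - ref_mean) / noise_sd

# hypothetical ROI measurements (mean HU, noise SD)
print(snr(60.0, 3.0))         # 20.0
print(cnr(60.0, -30.0, 3.0))  # 30.0
```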

Results:

Patients averaged 51.5 years (range 26-78); 77% were female, with mean body mass index 27.5±7.1 kg/m2. For image quality, noise, contrast, and visibility of small structures, AiCE performed better than AIDR3D (3.71 vs 3.06, p<0.001; 3.84 vs 2.95, p<0.001; 3.66 vs 3.12, p<0.001; 3.75 vs 3.16, p<0.001, respectively). The presence of artefacts and diagnostic confidence did not differ significantly between the two reconstruction techniques (3.01 vs 2.98, p=0.49; 4 vs 3.96, p=0.32, respectively). Signal-to-noise ratio was higher for AiCE images than AIDR3D images (21.0 vs 16.3, p<0.001), as was contrast-to-noise ratio (29 vs 23.4, p<0.001).

Conclusion:

Deep learning reconstruction improves image quality in chest CT.

Limitations:

A single-centre study.

Ethics committee approval

IRB approval, National Institutes of Health, USA.

Funding:

No funding was received for this work.

3
RPS 1805 - The effect of deep learning reconstruction on image quality in abdominal CT

07:29 J. Schuzer, Bethesda / US

Purpose:

To investigate the effect of a deep learning image reconstruction algorithm on image quality in abdominal CT scans.

Methods and materials:

With institutional ethics approval, 100 consecutive patients underwent abdominal CT at standard radiation doses on a 320-detector row CT scanner with the following scan parameters: helical scan, 0.5 mm x 80 detector rows, 120 or 100 kV with automatic exposure control, 0.5 s rotation speed, and standard pitch. Each scan was reconstructed as a 0.5 mm volume and 3 mm axial, coronal, and sagittal slices with a soft tissue kernel using the clinical-standard hybrid IR (AIDR3D) and deep learning reconstruction (AiCE) techniques. Images were evaluated for overall image quality, noise, presence of artefacts, contrast, visibility of small structures, and diagnostic confidence using a 4-point Likert scale. SNR and CNR were calculated for each reconstruction. Data were analysed with a paired t-test.
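The paired t statistic for matched AIDR3D/AiCE readings can be sketched with the standard library. This is illustrative only, using hypothetical Likert scores; the p-value lookup against the t distribution is omitted:

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic: mean of the per-pair differences divided by
    their standard error. Assumes equal-length paired samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# hypothetical paired ratings (AiCE vs AIDR3D)
print(paired_t([4, 4, 4, 3], [3, 3, 3, 3]))  # 3.0
```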

Results:

Patients averaged 51.1 years (range 19-78); 77% were female, with mean BMI 27.5±7.1 kg/m2. For image quality, noise, contrast, and visibility of small structures, AiCE performed better than AIDR3D (3.69 vs 3.08, p<0.001; 3.87 vs 2.84, p<0.001; 3.72 vs 3.17, p<0.001; 3.87 vs 3.25, p<0.001, respectively). The presence of artefacts and diagnostic confidence did not differ significantly between the two reconstruction techniques (2.98 vs 2.95, p=0.26; 3.98 vs 3.95, p=0.26, respectively). Signal-to-noise ratio was higher for AiCE images than AIDR3D images (4.58 vs 3.24, p<0.001), as was contrast-to-noise ratio (18.48 vs 13.31, p<0.001).

Conclusion:

Deep learning reconstruction improves image quality in abdominal CT.

Limitations:

A single-centre study.

Ethics committee approval

IRB approval, National Institutes of Health, USA.

Funding:

No funding was received for this work.

4
RPS 1805 - High resolution T2-weighted MRI of the abdomen using deep learning reconstruction

06:18 S. Funayama, Yamanashi / JP

Purpose:

To evaluate the feasibility of a deep learning-based reconstruction technique (DLRecon) in clinical abdominal MRI with both standard and high-resolution short scan protocols.

Methods and materials:

This study included 23 patients who underwent abdominal MRI. Each patient underwent 3 types of respiratory-triggered T2-weighted fast spin-echo imaging (Discovery MR750 3.0T; GE Healthcare) with the following parameters: standard T2WI (std-T2WI: matrix, 320x192; thickness, 5 mm; NEX, 2), high-resolution T2WI (HR-T2WI: matrix, 452x192; NEX, 1), and super high-resolution T2WI (sHR-T2WI: matrix, 452x192; thickness, 2.5 mm; NEX, 1). The acquired data were reconstructed with and without DLRecon. DLRecon is a new deep learning-based MR reconstruction comprising a deep convolutional residual encoder network trained on a database of over 10,000 images to achieve images with high SNR and high spatial resolution.

The depiction of anatomical details of the pancreas and the liver, motion artefact, blurring, and overall quality were assessed to test for differences between images with and without DLRecon. The signal-to-noise ratio (SNR) of liver parenchyma and the spleen-to-liver signal intensity ratio were also calculated.

Results:

The depiction of anatomical details in the pancreas, blurring, and overall quality were improved with DLRecon. The SNR with DLRecon (std-T2WI, 9.99±3.94) was significantly higher than without (9.08±2.9, p=0.01). Contrast between the liver and spleen was unaltered, with statistical equivalence at a threshold of 0.2 (std-T2WI, p<0.0001). The same trends held for HR-T2WI and sHR-T2WI.

Conclusion:

DLRecon provided improved SNR and less blurring in abdominal T2WI compared to standard reconstruction, for both standard and high-resolution protocols.

Limitations:

A retrospective study and a small number of patients.

Ethics committee approval

This study was approved by the institutional review board.

Funding:

No funding was received for this work.

5
RPS 1805 - Influence of a novel deep learning noise reduction technology on filtered back-projected CT images in comparison to iterative reconstruction

07:28 A. Steuwe, Düsseldorf / DE

Purpose:

The software PixelShine (AlgoMedica, Germany) promises high-quality images even for low-dose CT acquisitions by reducing noise with deep learning algorithms while preserving image information. This study aimed at an objective and subjective analysis of the image quality of the processed datasets.

Methods and materials:

This IRB-approved retrospective study included 27 patients (19 male) who underwent low-dose abdominal CT (Somatom Definition Flash, Siemens Healthineers) between November 2014 and February 2016. Images were reconstructed with filtered back-projection (FBP, B30f) and iterative reconstruction (IR, I30f, level 3, SAFIRE). Subsequently, FBP images were post-processed using PixelShine (B30f-PS; PixelShine version 1.2.104, sharpening level 2, noise level 14, processing strength A8, soft kernel settings). CT numbers (mean) and noise (standard deviation) in 6 ROIs (background, paravertebral muscle, fat, spleen, liver, and bladder) and subjective image quality were compared across the datasets.

Results:

Image noise was reduced significantly for B30f-PS datasets compared to B30f and I30f datasets (-38 to -50% and -12 to -30% for soft tissues, respectively). CT numbers in liver, spleen, bladder, and fat were constant across all datasets, whereas significant differences were notable for background (B30f-PS vs B30f, and B30f vs I30f) and muscle (B30f-PS vs B30f, and B30f-PS vs I30f). In general, PixelShine improved the image quality of B30f datasets considerably. Compared to I30f datasets, liver tissue looked more homogeneous, confirming a lower noise level. Beam-hardening artefacts were neither reduced nor enhanced.
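The percentage noise changes quoted above follow the usual relative-difference convention on ROI noise SDs. A minimal sketch with hypothetical values:

```python
def noise_change_pct(sd_processed, sd_reference):
    """Relative noise change in percent; negative values mean the
    processed dataset is less noisy than the reference."""
    return 100.0 * (sd_processed - sd_reference) / sd_reference

# hypothetical noise SDs (HU): B30f-PS vs B30f
print(round(noise_change_pct(12.4, 20.0)))  # -38
```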

Conclusion:

PixelShine's deep learning algorithm reduces noise while maintaining image information. Especially on older CT scanners where IR is not available, PixelShine can increase image quality. On newer scanners, it allows patient dose to be reduced while maintaining image quality.

Limitations:

n/a

Ethics committee approval

IRB-approved retrospective study.

Funding:

No funding was received for this work.

6
RPS 1805 - Objective and qualitative IQ analyses of deep learning image reconstruction in multiphasic CT imaging of the liver: a patient and phantom study

05:58 F. Legou, Creteil / FR

Purpose:

To evaluate the image quality (IQ) benefits of TrueFidelity, a deep learning image reconstruction (DLIR), in multiphasic liver CT compared to adaptive statistical iterative reconstruction V (ASIRV), in patients and on a phantom.

Methods and materials:

66 patients underwent multiphasic liver CT during a 1-month period. The protocol was tailored to patient morphology: 80 kV for body mass index (BMI) <25, 100 kV for 25<BMI<30, and 120 kV for BMI>30. IQ of patient images reconstructed with DLIR and ASIRV50 was assessed on the portal phase by measuring liver parenchyma contrast-to-noise and signal-to-noise ratios (CNR, SNR), and qualitatively by two radiologists using a 5-point Likert scale. Phantom images acquired with similar protocols were evaluated by computing the noise power spectrum and the task-based modulation transfer function (MTFtask).

Results:

Compared to ASIRV50, CNR and SNR were significantly improved with DLIR by 71% and 56% at 80 kV, 83% and 77% at 100 kV, and 61% and 60% at 120 kV, respectively (p<0.01 or less). Qualitative IQ was also improved with DLIR for each patient morphology (p<0.0001). On the phantom, compared to ASIRV50, DLIR reduced the noise magnitude by 31% at 80, 100, and 120 kV while maintaining the noise texture (noise mean frequency was 0.22 vs 0.21, 0.23 vs 0.21, and 0.25 vs 0.22 mm-1, respectively). DLIR improved the MTFtask at 80, 100, and 120 kV (the spatial frequency at which the MTFtask falls to 50% was 0.43 vs 0.36, 0.43 vs 0.39, and 0.41 vs 0.37 mm-1, respectively).
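The figure of merit quoted above, the spatial frequency at which MTFtask falls to 50%, can be read off a sampled MTF curve by linear interpolation. A minimal sketch with hypothetical values (not the authors' analysis code):

```python
def mtf50(freqs, mtf):
    """Return the spatial frequency (mm-1) at which a sampled, decreasing
    MTF curve first crosses 0.5, by linear interpolation between the two
    bracketing samples. Assumes the curve actually crosses 0.5."""
    for i in range(len(freqs) - 1):
        f0, f1, m0, m1 = freqs[i], freqs[i + 1], mtf[i], mtf[i + 1]
        if m0 >= 0.5 >= m1 and m0 != m1:
            return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
    return None

# hypothetical sampled MTF curve
print(round(mtf50([0.0, 0.2, 0.4, 0.6], [1.0, 0.8, 0.5, 0.2]), 3))  # 0.4
```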

Conclusion:

Compared to ASIRV50, DLIR improves the IQ of liver CT by reducing image noise without smoothing texture and by improving spatial resolution.

Limitations:

IQ of the arterial phases had not been assessed at the time of submission; analysis is ongoing.

Ethics committee approval

SFR approval.

Funding:

No funding was received for this work.

7
RPS 1805 - Artificial intelligence for image-quality control of chest radiographs

05:31 K. Nousiainen, Helsinki / FI

Purpose:

To develop an artificial intelligence method for evaluating chest radiograph image quality.

Methods and materials:

We considered 3 different features of image quality: inclusion, rotation, and inspiration. Inclusion was further divided into 4 edges: sin, dex, top, and bottom. The data comprised 2,019 posteroanterior chest radiographs acquired in an upright position. We annotated the images based on the European Commission's guidelines on quality criteria for diagnostic radiographic images. The inclusion criteria were divided into three classes: too tight, correct, and too wide. Rotation and inspiration were divided into two classes: ok and not ok. We augmented the image data for inclusion by cropping correct images to meet the too-tight criteria, and for inclusion and rotation by flipping the images horizontally. The image histograms were equalised and the images were resized to a resolution of 512x512 pixels. Approximately 100 and 200 images were set aside as validation and test data, respectively. We trained ResNet50 and DenseNet121 networks with the remaining images.
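The histogram equalisation step can be sketched in pure Python on a toy flattened image (illustrative only; the study's actual preprocessing implementation is not specified and would normally use a library routine):

```python
def equalize(pixels, levels=256):
    """Classic histogram equalisation of a flattened grey-level image:
    map each value through the normalised cumulative distribution so the
    output spreads over the full [0, levels-1] range.
    Assumes at least two distinct grey levels (else division by zero)."""
    n = len(pixels)
    cdf, running = {}, 0
    for v in sorted(set(pixels)):
        running += pixels.count(v)
        cdf[v] = running / n
    cdf_min = min(cdf.values())
    return [round((cdf[v] - cdf_min) / (1 - cdf_min) * (levels - 1))
            for v in pixels]

# toy 2x2 "image": dark, two mid-greys, bright
print(equalize([0, 100, 100, 255]))  # [0, 170, 170, 255]
```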

Results:

ResNet50 and DenseNet121 both performed accurately for inclusion detection at the sin and dex edges. DenseNet121 performed better for inclusion detection at the top and bottom edges, as well as for patient rotation and inspiration detection. The AUC was >0.92 for inclusion detection at all four edges across the three classes. The AUC was >0.71 and >0.89 for rotation and inspiration, respectively.

Conclusion:

Artificial intelligence can be used in a clinical setting to give instant feedback on chest radiograph image quality. Additionally, the trained networks provide a tool for long-term quality control of a radiography unit.

Limitations:

Data from only one centre was used.

Ethics committee approval

n/a

Funding:

No funding was received for this work.

8
RPS 1805 - Deep learning-based reduction of moving CT metal artefacts

06:05 A. Saalbach, Hamburg / DE

Purpose:

Non-static metal implants such as pacemakers frequently lead to heavy streak-shaped artefacts in reconstructed CT image volumes. The reliable evaluation of neighbouring anatomy, for instance with regard to inflammation or calcification, may thereby be limited. Furthermore, motion precludes the application of standard second-pass metal artefact reduction (MAR) methods, which implicitly assume a static object during CT acquisition. We propose a MAR pipeline which is robust to motion and applicable across a wide range of scanner types, acquisition modes, and contrast protocols.

Methods and materials:

The MAR pipeline uses raw projection data and is therefore independent of 3D motion blur. It comprises three convolutional neural network ensembles which are trained from scratch.

First, SegmentationNets identify metal-affected line integrals in the input raw projection data. Second, values within the predicted metal shadow are treated as missing data and refilled based on surrounding line integrals by means of the InpaintingNets. The CT volume without metal is obtained by filtered backprojection of the inpainted sinogram. Finally, the ReinsertionNets determine metal positions in the image domain based on the segmented metal shadow.
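The three-stage data flow described above can be sketched as follows. The network arguments are placeholders standing in for the trained ensembles; this illustrates only the order of operations, not the authors' implementation:

```python
def mar_pipeline(sinogram, segmentation_net, inpainting_net,
                 reinsertion_net, fbp):
    """Sketch of the MAR data flow: segment the metal shadow in the raw
    projections, inpaint it, reconstruct a metal-free volume, then
    re-insert the metal in the image domain."""
    metal_mask = segmentation_net(sinogram)            # 1) metal shadow
    clean_sino = inpainting_net(sinogram, metal_mask)  # 2) refill shadow
    volume = fbp(clean_sino)                           # 3) metal-free volume
    return reinsertion_net(volume, metal_mask)         # 4) re-insert metal

# demo with trivial numeric stand-ins for the networks
out = mar_pipeline(
    sinogram=1,
    segmentation_net=lambda s: 2,       # pretend mask
    inpainting_net=lambda s, m: s + m,  # pretend inpainting
    reinsertion_net=lambda v, m: v + m, # pretend reinsertion
    fbp=lambda c: c * 10,               # pretend reconstruction
)
print(out)  # 32
```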

Results:

The data for supervised learning is generated by introducing synthetic metal implants into the projection data of 14 metal-free clinical cases with desired acquisition settings. A pacemaker lead model ensures sensible insertion positions, pathways, and motion trajectories by taking the cardiac anatomy and concomitant ECG-data into account.

Conclusion:

The fully automatic pipeline was tested on 9 clinical cases with real pacemakers, including both ECG-gated and ungated contrast-enhanced CT scan types. Significant metal artefact reduction was achieved.

Limitations:

While empirical evaluation was restricted to pacemaker leads, applicability to other metal implants is very likely.

Ethics committee approval

n/a

Funding:

No funding was received for this work.

9
RPS 1805 - Evaluation of automated quality control of multicentre clinical trial CT data using spine localisation based on a machine learning method

06:45 S. Lee, London / UK

Purpose:

Consistent image quality across multiple centres is crucial in clinical trials. Quality control (QC) is typically done by centralising data followed by manual checks prior to radiological review. To evaluate the feasibility of automating the QC process, we developed a software tool, Automatic Visual Quality Control (AutoVQC), that automatically locates the spine, calculates the anatomical field of view (FOV), and detects imaging artefacts using machine learning. AutoVQC was applied to in-house CT data from a multicentre clinical trial.

Methods and materials:

The quality of CT scans (in total, 459 series from 62 subjects acquired at 7 sites) was evaluated using AutoVQC, which recognises the anatomical FOV of the images using random forests and detects artefacts such as missing slices. AutoVQC returns two values between 0 (foot) and 1 (top of the head) indicating the FOV along the canonical body axis, together with an outcome: pass or artefacts. The QC outcome was reviewed and classified as true-positive, true-negative, false-positive, or false-negative, where positive and negative indicate QC pass and artefacts, respectively.
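The four outcome counts map onto the standard confusion-matrix metrics reported in the results. A small sketch with hypothetical counts ("positive" meaning QC pass, as above):

```python
def qc_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics from outcome counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

m = qc_metrics(tp=80, tn=10, fp=5, fn=5)  # hypothetical counts
print(round(m["accuracy"], 2))  # 0.9
```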

Results:

AutoVQC returned the spine localisation result within one minute per series. The result was reviewed for sufficient data quality for subsequent analyses and for an accurate canonical body axis range. Across all 459 series evaluated, accuracy of 0.93, precision of 0.89, recall of 0.98, and specificity of 0.87 were achieved. Causes of artefacts included missing slices, non-axial image orientation, poor image quality, and the presence of multiple series in a single folder.

Conclusion:

The initial evaluation of AutoVQC on multicentre study data demonstrated the potential of incorporating it into clinical trial image data management/analysis workflow for quality control of images based on automated spine localisation.

Limitations:

A medium sample size.

Ethics committee approval

n/a

Funding:

This work was supported by GSK.

10
RPS 1805 - Deep learning reconstruction in ultra-low-dose abdominal CT: comparison with hybrid-iterative reconstruction

06:20 P. Rogalla, Toronto / CA

Purpose:

To evaluate whether deep learning reconstruction (DLR) based on a convolutional neural network provides superior image quality in ultra-low-dose abdominal CT compared to the hybrid-iterative reconstruction method.

Methods and materials:

62 patients underwent CT of the abdomen (135 kV, 20-40 mA weight-based, 0.5 s rotation time, 0.5 mm x 80 detector rows, 1.0 mSv reference dose). Four series were reconstructed with 0.5 mm slice thickness: (A) hybrid-iterative reconstruction (AIDR), (B) DLR (combined), (C) DLR (sharp), and (D) DLR (smooth). All images were presented as 3 mm thick slices, 4-on-1 on a 4K monitor, in random order and without image annotation. Using forced ranking, 2 readers evaluated the series in the categories of conspicuity, noise texture, low/high-contrast detectability, artefacts, and overall appeal. The readers also graded image quality on a Likert scale (1=excellent, 10=low quality). Inter-reader agreement was calculated for all categories. Image noise (SD) was measured in external and intra-abdominal air and in the liver.

Results:

DLR series (B) was generally preferred (rank 1) over all other reconstructions by both readers in all patients and in all categories (p<0.001). The overall mean ranks of series (A)/(B)/(C)/(D) were 4.0/2.47/1.10/2.46, and the Likert scores were 6.9/3.4/1.97/3.6, respectively. Inter-reader agreement was k=0.80 for forced ranking and k=0.78 for the Likert score. SDs were 17.23/8.31/7.04/6.32 for external air, 21.5/10.8/10.3/8.3 for intra-abdominal air, and 23.1/9.9/9.3/7.25 for liver tissue, respectively (all p<0.0001, except between groups B and C for intra-abdominal air and liver).

Conclusion:

Deep learning reconstruction provides superior subjective image quality in ultra-low-dose abdominal CT compared to the current standard-of-care iterative method.

Limitations:

The number of readers (2) and patients (62).

Ethics committee approval

Ethics approval was obtained.

Funding:

No funding was received for this work.
