Research Presentation Session: Artificial Intelligence & Machine Learning & Imaging Informatics

RPS 1105 - Novel AI models redefining radiology diagnostics

February 29, 16:00 - 17:30 CET

7 min
The use of large language models for first triage decisions for patients at risk for reaction during intravenous contrast administration: a proof of concept
Miriam Dolciami, Rome / Italy
Author Block: G. Avesani1, M. Marin2, M. Dolciami1, L. D'Erme1, A. Perazzolo1, L. Russo1, V. Celli1, B. Gui1, E. Sala1; 1Rome/IT, 2Gravedona/IT
Purpose: The aim of this study was to determine whether a large language model (GPT-3.5) can provide accurate and valuable guidance on the management of patients at risk of reaction during intravenous contrast administration.
Methods or Background: Six guidelines from various scientific societies were collected, in both English and the local language. These documents were embedded using OpenAI embeddings within the LangChain framework, creating a database that provides information to a GPT-3.5-turbo model. We formulated 100 clinical scenarios describing different situations, combining allergic and renal problems (e.g., moderate to severe allergic reaction and different renal functions) and different types of contrast media (iodine and gadolinium). For each clinical scenario, we asked the model to give a textual answer indicating the correct patient management according to the previously given guidelines. The responses generated by the model were evaluated by a human expert in the field for formal correctness and clinical usefulness. A 5-point Likert scale was used for each task (from 5 = correct/safe or very useful to 1 = completely wrong or completely useless for clinical purposes). We dichotomised the responses with a cut-off of ≥4 for an answer to be considered acceptable.
Results or Findings: The model's answers were judged formally correct and safe for patients in 95% of scenarios and valuable in 84% of cases. Most answers deemed not valid were considered too vague to be used.
Conclusion: LLMs have the potential to aid in the clinical management of critical patients. Such models can be very useful for novice personnel or initial screenings. Better performance might be achieved with fine-tuning and "tree of thought" techniques.
Limitations: There was limited prompt engineering and fine-tuning.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: No patient data were used.
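The retrieval-augmented setup described above — guideline passages embedded into a vector store and retrieved to ground the model's answer — can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the bag-of-words `embed` function, the guideline snippets, and all names are stand-ins for the OpenAI-embedding and LangChain components.

```python
from collections import Counter
from math import sqrt

# Toy stand-in for OpenAI embeddings: a bag-of-words vector.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical guideline snippets standing in for the six embedded society documents.
GUIDELINES = [
    "Severe iodinated contrast allergy: premedicate and consider a gadolinium alternative.",
    "eGFR below 30: give iodinated contrast only if strictly necessary, with hydration.",
    "Mild prior reaction: observe the patient for 30 minutes after injection.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k guideline chunks most similar to the clinical scenario."""
    q = embed(query)
    return sorted(GUIDELINES, key=lambda g: cosine(q, embed(g)), reverse=True)[:k]

def build_prompt(scenario: str) -> str:
    """Assemble the grounded prompt that would be sent to the chat model."""
    context = "\n".join(retrieve(scenario))
    return f"Guidelines:\n{context}\n\nScenario: {scenario}\nRecommended management:"
```

A production version would replace `embed` with calls to an embedding API and send the string returned by `build_prompt` to GPT-3.5-turbo.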
7 min
Enhancing quick-acquired MRI scans with the DL-based Aikenist framework: a clinical assessment
Bhanu K. N. Prakash, Singapore / Singapore
Author Block: C. S. Arvind1, S. S. Bhat2, B. Dikendra3, S. Z. T. Jordan1, A. Amrapuram2, B. K. N. Prakash1; 1Singapore/SG, 2Bangalore/IN, 3Chennai/IN
Purpose: MRI remains a cornerstone of clinical diagnostics and research. High-resolution MRI, though detailed, necessitates extended acquisition times, increasing patient discomfort and the risk of motion artifacts. Quick-acquisition techniques are a frequently used alternative, but they tend to compromise image quality through noise and diminished contrast.
Methods or Background: This study applied DL-based Aikenist post-processing enhancement to QuickScan-acquired MR data from 30 brain scans (5,400 slices) and 32 abdomen subjects (1,920 slices). The brain and abdomen data were acquired on different MRI scanners (GE, Siemens, Toshiba) at different locations using different acquisition protocols, introducing scanner variability.
Results or Findings: Our results showed significant improvements in image quality metrics, even accounting for scanner variability. For brain scans, the average SNR rose from …44 to 42.92 (p<0.001) and CNR from 11.88 to 18.03 (p<0.001). Abdominal scans showed an SNR increase from 35.30 to 50.24 (p<0.001) and a CNR increase from 10,290.93 to 93,767.22 (p<0.001). Furthermore, in a double-blinded evaluation, clinicians emphasised the enhanced visibility of intricate anatomical structures and intrastructural changes — such as the IMAT, muscle boundaries, tissue-tissue interfaces, brain structural delineation, and improved bias field correction — which were previously not accentuated. Their feedback confirmed the clinical importance of the enhancement, particularly in discerning smaller regions previously concealed by noise and reduced contrast.
Conclusion: Aikenist enhancement does not merely improve MRI image aesthetics but offers a robust means of ensuring diagnostic accuracy without extending scan times. As MRI scans become more integral to healthcare, innovations like this pave the way for a more patient-centred and efficient imaging process.
Limitations: Results, though promising, stem from specific anatomies and scanner types. Efficacy may vary with different MRI parameters.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: The IRB approved this study.
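SNR and CNR can be defined in several ways, and the abstract does not state which definitions were used. A minimal sketch under one common convention (mean signal over the standard deviation of a background region):

```python
from statistics import mean, stdev

def snr(signal_roi, background_roi):
    """Signal-to-noise ratio: mean signal intensity in a tissue ROI divided
    by the standard deviation of a background (noise) ROI."""
    return mean(signal_roi) / stdev(background_roi)

def cnr(roi_a, roi_b, background_roi):
    """Contrast-to-noise ratio between two tissue ROIs, normalised by the
    background noise standard deviation."""
    return abs(mean(roi_a) - mean(roi_b)) / stdev(background_roi)
```

The ROI values would come from pixel intensities in the reconstructed images; before/after comparisons like those above would apply these functions to the same ROIs in the original and enhanced scans.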
7 min
Frameworks for artificial intelligence research in medical image analyses: a systematic review
Manjunath Kanabagatte Nanjundappa, Manipal / India
Author Block: M. Kanabagatte Nanjundappa1, V. Kulkarni2, A. Kulkarni3, Y. M4, C. Maram5; 1Manipal/IN, 2Leesburg, VA/US, 3Bengaluru/IN, 4Karlsruhe/DE, 5Hyderabad/IN
Purpose: Artificial intelligence (AI) has a strong footprint in the radiology workflow, from image acquisition to the reporting of findings. This review provides an overview of AI frameworks for medical image analysis (in diagnostics and therapeutics) from a biomedical engineering perspective.
Methods or Background: Several AI, machine learning (ML), and deep learning (DL) frameworks developed by academic research institutes and healthcare companies are available as open-source software. Commercially available and community-based DL frameworks were reviewed and compared according to parameters such as the technology used, CPU/GPU-based implementation, feature learning time, performance evaluation, desktop installation versus cloud-based application, deployment type (commercial grade with production code, or research prototype) and clinical validation.
Results or Findings: More than a hundred open-source DL frameworks are available. A few have done exceptionally well in computer-aided diagnosis systems, such as Microsoft InnerEye, NVIDIA Clara, pyRadiomics, and MONAI. Regulatory approvals and clinical validations are pending for many of the reviewed products.
Conclusion: This review helps researchers, radiology residents, and radiologists gain insight into these frameworks and libraries and select the right one for fast prototype development for image analysis in radiology applications.
Limitations: We could not evaluate all AI frameworks, as they have vast applications across many imaging modalities for diagnosis and therapy; moreover, most are not yet fully clinically validated for acceptance as clinical solutions.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: This is a review and hence ethical issues did not arise.
7 min
Energy-optimised scheduling of CT examinations through mathematical modelling
Martin Segeroth, Basel / Switzerland
Author Block: M. Segeroth1, A. Nurkanović2, J. Vosshenrich1, M. Diehl2, T. Heye1; 1Basel/CH, 2Freiburg/DE
Purpose: Radiology departments, and medical imaging devices in particular, are major energy consumers within a hospital. The aim of this study was to calculate the possible energy savings from optimally scheduling CT examinations.
Methods or Background: Data from all CT examinations performed on three CT scanners in our tertiary care radiology department in 2015 were retrospectively included. The data consisted of examination timestamps and power consumption in kilowatts. The optimal scheduling problem was formulated as an integer linear programming (ILP) problem with linear constraints, a linear objective function, and only binary decision variables. This formulation allows rigorous modelling of a nonconvex and nondifferentiable objective function and makes it possible to compute the optimal solution even for very large models.
Results or Findings: In total, 261 workdays were analysed, with 15,072 CT examinations scheduled on the three CT scanners. The duration to solve the ILP for each workday was 10.14 s (9.46-10.80 s). The model yielded a 34.9% reduction in the scanners' combined daily energy consumption through optimal examination scheduling. In absolute terms, daily energy consumption could be decreased by 42.2 kWh, from 121.0 kWh (120.6-121.4 kWh) to 78.8 kWh (77.8-79.8 kWh; P<.001). The energy savings are primarily attributable to examination shifting, which allows for increased system off times. Overall, 10,930.6 kWh in energy, $2,864 in cost, and 1,399.1 kgCO2eq in carbon emissions could theoretically be saved in our setting.
Conclusion: Optimised CT examination scheduling through automatic modelling has substantial sustainability and cost benefits for radiology departments. The feasibility of implementing the model in clinical routine needs further investigation.
Limitations: Although the schedules were verified to be optimal, the approach requires extensive clinical testing.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the ethics committee of northwest and central Switzerland, project ID 2022-
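The scheduling idea can be illustrated with a toy model. The sketch below is not the authors' formulation: it brute-forces the binary choice of which time slots run an examination, charging idle power for the span during which the scanner must stay switched on; a real implementation would hand the same binary decision variables and linear objective to an ILP solver. All power figures are assumed for illustration.

```python
from itertools import combinations

IDLE_KW = 2.0   # assumed idle energy per slot while the scanner is on
EXAM_KW = 5.0   # assumed extra energy for a slot that runs an examination
SLOTS = 8       # time slots in the working day
N_EXAMS = 3     # examinations to place, at most one per slot

def energy(schedule: tuple[int, ...]) -> float:
    """Energy of one candidate schedule: the scanner stays on from the first
    to the last examination slot and can be switched off otherwise, so
    clustering examinations shortens the idle span."""
    span = max(schedule) - min(schedule) + 1
    return span * IDLE_KW + len(schedule) * EXAM_KW

def best_schedule() -> tuple[int, ...]:
    """Exhaustively search the binary slot assignments (the toy analogue of
    solving the ILP) for the minimum-energy schedule."""
    return min(combinations(range(SLOTS), N_EXAMS), key=energy)
```

With these assumptions the optimum clusters all three examinations into consecutive slots, mirroring the "examination shifting" that drives the reported savings.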
7 min
medBERT.de: a German BERT model tailored for the medical domain: insights into the results of radiological text classification and entity recognition
Felix Busch, Munich / Germany
Author Block: F. Busch1, J-M. Papaioannou1, P. Grundmann1, F. Borchert2, L. C. Adams3, L. Xu1, M. Makowski3, A. Löser1, K. K. Bressem1; 1Berlin/DE, 2Potsdam/DE, 3Munich/DE
Purpose: We developed medBERT.de, a German BERT (Bidirectional Encoder Representations from Transformers) model for the medical domain trained on 4.7 million German medical documents. Here, we present the results of our custom pretrained BERT models on classification and entity recognition tasks from radiology reports.
Methods or Background: medBERT.de is built on the standard BERT architecture, featuring 12 layers with 768 hidden units each, 8 attention heads, and a 512-token input limit. Three distinct radiological benchmarks, each based on 2,000 radiology reports obtained from a level 1 hospital in Germany, were developed to span various report lengths and tasks: short-text classification of chest x-ray reports, classification of longer chest CT reports, and a named entity recognition (NER) task on medium-sized CT/x-ray reports of the wrist. Reports were manually labelled by radiologists and medical students for various pathologies and therapeutic devices. The model and benchmarks were made publicly available (https://huggingface.co/GerMedBERT/medbert-512).
Results or Findings: medBERT.de displayed superior performance on the chest x-ray (AUROC: …65) and CT classification (AUROC: 96.69) tasks compared with previously published German BERT models. For the NER task, the model trained with deduplicated data achieved the highest AUROC of 83.28. Notably, medBERT.de's performance on longer texts from CT reports (258 ± 100 words) was especially pronounced compared with the x-ray (98 ± 27 words) or NER (108 ± 41 words) tasks.
Conclusion: The study underscores the potential of domain-specific BERT models in efficiently processing radiology reports. Their ability to handle varying report lengths with remarkable accuracy makes them promising tools for radiological applications.
Limitations: medBERT.de is primarily based on data from radiology reports. The origin of the data from a single university hospital could introduce bias.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by an ethics committee; IRB-approval number: EA2/078/
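The AUROC values reported above (on a 0-100 scale) can be reproduced from model scores and labels via the rank-based identity: AUROC is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. A stdlib sketch, not the authors' evaluation code:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U identity: the
    fraction of positive/negative pairs where the positive example gets the
    higher score, counting score ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Multiplying the result by 100 gives the percentage-style figures quoted in the abstract.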
7 min
Image fusion using pixelwise gradient model for image fusion (PGMIF)
Ka-Hei Cheng, Hong Kong / Hong Kong SAR China
Author Block: K-H. Cheng, J. Cai; Hong Kong/HK
Purpose: Magnetic resonance imaging (MRI) plays a pivotal role in the accurate delineation of tumours for radiotherapy. However, conventional MRI sequences often show inconsistent tumour contrast across patients. This study aimed to assess the potential of a novel multimodal image fusion method, the Pixelwise Gradient Model for Image Fusion (PGMIF), to improve MRI tumour contrast and its consistency across patients.
Methods or Background: We utilised T1-w and T2-w MR images from a cohort of 80 patients. The proposed PGMIF is based on a pixelwise gradient term that captures the shape of the input images and a generative adversarial network (GAN) term that captures image contrast. It was compared with other fusion algorithms: the gradient model with maximum comparison among images (GMMCI), a deep learning model with weighted loss (DLMWL), pixelwise weighted average (PWA), and maximum of images (MoI). Two metrics were used to test the fusion methods' performance: tumour contrast-to-noise ratio (CNR) and a refined Sobel operator analysis measuring edge sharpness.
Results or Findings: PGMIF led on both metrics, registering a CNR of 1.237 ± 0.100, a significant enhancement over T1-w (0.976 ± 0.052) and T2-w MR images (1.077 ± 0.087). PGMIF also outperformed the other models, including GMMCI, DLMWL, PWA, and MoI. In the Sobel operator analysis, PGMIF again showed the highest Sigmoid of Sobel Metric values in comparisons against T1-w and T2-w MR images, demonstrating contrast amplification and edge acuity.
Conclusion: The novel PGMIF method shows potential to enhance MRI tumour contrast while retaining the anatomical structures of the source images. Its implementation could be useful in NPC tumour delineation.
Limitations: Ripples in the input images may be amplified in the fused images.
Funding for this study: This research was partly supported by research grants from the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20), the Project of Strategic Importance Fund (P0035421) and Projects of RISA (P0043001) of The Hong Kong Polytechnic University, the Shenzhen Basic Research Program (JCYJ20210324130209023) of the Shenzhen Science and Technology Innovation Committee, and the Health and Medical Research Fund (HMRF 09200576), the Health Bureau, The Government of the Hong Kong Special Administrative Region.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: The data set used was approved by the Research Ethics Committee in Hong Kong (Kowloon Central/Kowloon East, reference number: KC/KE-19-0085/ER-1).
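The edge-sharpness analysis builds on the Sobel gradient magnitude. Below is a pure-Python sketch of that building block on a 2D list image; the abstract's refined metric additionally passes these magnitudes through a sigmoid, which is omitted here, and a real pipeline would use array convolution rather than explicit loops.

```python
# 3x3 Sobel kernels for the horizontal and vertical intensity gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Return the gradient-magnitude map over the interior pixels of a 2D
    list image; large values mark sharp edges, so a sharper fused image
    yields larger magnitudes along tissue boundaries."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(SOBEL_Y[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a vertical step edge the interior magnitudes are maximal along the step and zero in flat regions, which is what an edge-sharpness metric aggregates.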
7 min
Recurrence-free survival prediction in head and neck cancers using deep learning: a multicentre, multimodal approach harnessing uncertainty estimation and counterfactual explainability
Zohaib Salahuddin, Maastricht / Netherlands
Author Block: Z. Salahuddin, H. C. Woodruff, Y. Chen, X. Zhong, P. Lambin; Maastricht/NL
Purpose: This study aims to develop an end-to-end trustworthy deep learning model for predicting recurrence-free survival (RFS) in head and neck cancers from FDG-PET and CT images and automated delineations, with a focus on increasing confidence and explainability through uncertainty predictions and counterfactual image generation.
Methods or Background: Given the prevalence and severity of head and neck cancers worldwide, an algorithm capable of accurately predicting RFS could significantly enhance therapeutic planning and patient management. The developed adaptive 3D ResNet-50 deep learning model was trained on multimodal data (clinical data, FDG-PET, and CT images) using a multi-task logistic regression framework. Fivefold cross-validation was performed on 378 patients from 5 centres, and 111 patients from 2 further centres were used as an external test set. Automated delineations of tumour and lymph nodes were obtained via a modified nnU-Net. The model used a multi-head, multi-loss function to estimate prediction uncertainty and employed a VAE-GAN for latent-space traversal, generating counterfactual images to explore and visualise hypothetical scenarios and enhance explainability.
Results or Findings: The model demonstrated a competitive c-index of 0.681 [95% CI: 0.663-0.694] in fivefold cross-validation and 0.671 on the two-centre external test set. Predictions with lower uncertainty were associated with superior performance, evidenced by a c-index of 0.683. Kaplan-Meier curves demonstrated a significant split between low- and high-risk groups. Counterfactuals revealed that both shape and texture features from FDG-PET and CT images are important for predicting survival.
Conclusion: The developed model exhibits promising potential in providing trustworthy and interpretable RFS predictions for head and neck cancer patients, leveraging multicentre multimodal data, uncertainty estimates, and counterfactual explainability.
Limitations: The model necessitates prospective validation, and an in-silico trial is needed to assess the clinical efficacy of the counterfactuals and uncertainty predictions.
Funding for this study: Funding for this study was received from EuCanImage n°
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: Institutional Review Boards of all participating PROVIDER institutions permitted use of images and clinical data, either fully anonymised or coded, from all cases for research purposes only. Retrospective analyses were performed in accordance with the relevant guidelines and regulations as approved by the respective institutional ethics committees, with protocol numbers MM-JGH-CR15-50 (HGJ, CHUS, HMR, CHUM) and CER-VD 2018-01513 (CHUV). For CHUP, institutional review board approval was waived as all patients signed informed consent for use of their data for research purposes at diagnosis. For MDA, ethics approval was obtained from the University of Texas MD Anderson Cancer Center Institutional Review Board, protocol number RCR03-. For USZ, ethics approval was related to the clinical trial NCT01435252, "A phase II study in patients with advanced head and neck cancer of standard chemoradiation and add-on Cetuximab". For CHB, the fully anonymised data originate from patients who consented to the use of their data for research purposes. List of PROVIDERS: HGJ: Hôpital Général Juif, Montréal, CA; CHUS: Centre Hospitalier Universitaire de Sherbrooke, Sherbrooke, CA; HMR: Hôpital Maisonneuve-Rosemont, Montréal, CA; CHUM: Centre Hospitalier de l'Université de Montréal, Montréal, CA; CHUV: Centre Hospitalier Universitaire Vaudois, CH; CHUP: Centre Hospitalier Universitaire de Poitiers, FR; MDA: MD Anderson Cancer Center, Houston, Texas, USA; USZ: UniversitätsSpital Zürich, CH; CHB: Centre Henri Becquerel, Rouen, FR.
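The c-index reported above is the concordance index; a stdlib sketch using Harrell's standard definition (the authors' exact implementation is not specified), where higher predicted risk should accompany earlier observed recurrence:

```python
from itertools import combinations

def c_index(times, events, risks):
    """Harrell's concordance index: over comparable patient pairs (the
    earlier time has an observed event, not a censoring), the fraction where
    the model assigns the higher risk to the earlier failure; risk ties
    count one half."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:
            i, j = j, i  # order the pair so patient i fails/censors first
        if times[i] == times[j] or not events[i]:
            continue  # tied times or earlier time censored: not comparable
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is the scale on which the reported 0.681/0.671 sit.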
7 min
A unified transformer-based model for characterisation and diagnosis of focal liver lesions on multiparametric MRI images
Zhehan Shen, Shanghai / China
Author Block: Z. Shen, F. Yan; Shanghai/CN
Purpose: The aim of this study was to develop and evaluate an AI model that incorporates Liver Imaging Reporting and Data System (LI-RADS) criteria and other relevant radiological features for the diagnosis of focal liver lesions (FLLs) using AMRI data. In addition, we aimed to compare the performance of the AI model with that of radiologists of different experience levels on internal and external test sets.
Methods or Background: We retrospectively collected MRI data from 1,024 patients with 1,147 FLLs who underwent contrast-enhanced abdominal MRI. The FLLs were classified into six categories: hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), metastasis, cyst, haemangioma and focal nodular hyperplasia (FNH). We trained the AI model on MRI images of 560 FLLs collected from January 2020 to July 2022. We evaluated its performance on an internal test set (243 FLLs from July 2022 to August 2023) and an external test set (344 FLLs from a public data set). We used the DeLong method and the McNemar test to compare the performance of the AI model and two radiologists with different levels of experience.
Results or Findings: The AI model achieved an overall accuracy of 0.87, a sensitivity of 0.85, a specificity of 0.91, and an area under the receiver operating characteristic curve (AUC) of 0.92 for FLL diagnosis on the internal test set. The AI model outperformed the junior radiologists in accuracy, sensitivity, specificity and AUC, except that one senior radiologist had similar accuracy and specificity but lower sensitivity. The AI model also showed good generalisation across centres, with an AUC of 0.90 on the external public data set.
Conclusion: The proposed AI model based on radiological characteristics can effectively diagnose FLLs using MRI data and can assist radiologists in improving their diagnostic performance and efficiency.
Limitations: No limitations were identified.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: No information provided by the submitter.
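The accuracy, sensitivity and specificity quoted above follow directly from a 2x2 confusion matrix; a minimal sketch with illustrative counts (the study's actual confusion matrix is not given):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity and specificity from a 2x2 confusion matrix:
    tp/fp/fn/tn are true-positive, false-positive, false-negative and
    true-negative counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

For multi-class tasks such as the six FLL categories, these figures are typically computed per class (one-vs-rest) and then averaged.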
7 min
Establishing robust ground truth labels to create machine learning response assessment models using an innovative fusion technique of rectal MRI and whole mount histopathologic specimens
Josip Ninčević, New York / United States
Author Block: N. Horvat, J. M. Santos, J. Ninčević, C. Firat, J. Heiselman, J. Chakraborty, J. Shia, J. Garcia-Aguilar, M. J. Gollub; New York, NY/US
Purpose: MRI-based radiomics is a promising objective tool for predicting rectal cancer treatment response but lacks generalisability. Whole-mount histology (WMH) is considered the gold-standard reference for point-by-point comparison, yet radiomic models have not yet been trained against WMH. We aimed to evaluate the accuracy of rigid point-based registration for fusing WMH and MRI of the rectum.
Methods or Background: The study included 18 consecutive rectal cancer patients who underwent neoadjuvant therapy and total mesorectal excision from 2018 onwards. A multimodal radiology-pathology image registration workflow was developed. First, a radiologist and a pathologist delineated the tumour bed, the internal and external rectal borders, and eight corresponding MR and WMH image landmarks. Second, automated rescaling and point-based registration of the images via the delineated landmarks was performed. Third, initial rigid alignment of MR and WMH images accounted for differences in rectal distension using biomechanically constrained plane-strain elastic deformable registration. Fourth, a combination of in-house rigid registration, active contours, and finite element software performed the image registration. Fifth, 3D Slicer rendered the outputs of the multimodal image fusion system for accurate and precise visualisation.
Results or Findings: Dice overlap and modified Hausdorff distance of the delineated MR and pathology images showed good agreement between external and internal border segmentations (P-values <.05, comparing in each case mean values averaged across the three levels per case).
Conclusion: Deformable registration significantly improves internal and external contour agreement over rigid point-based registration. Establishing such a method will allow the generation of ground truth labels to predict complete response and improve patient care by safely avoiding surgery.
Limitations: The limitations of the study are its retrospective design and small sample.
Funding for this study: This project was partly supported by the National Cancer Institute Cancer Center Core Grant P30 CA008748 and the Society of MSK (PI: Natally Horvat). The RSNA Research & Education Foundation supported the project through grant number RSD2302 (PI: Natally Horvat). The content is solely the authors' responsibility and does not necessarily represent the official views of the RSNA R&E Foundation.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the institutional review board, with a waiver of written informed consent, and was compliant with the Health Insurance Portability and Accountability Act.
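The two agreement measures used above can be sketched directly. The definitions below are the standard ones — Dice overlap on pixel sets, and the Dubuisson-Jain modified Hausdorff distance as the larger of the two mean nearest-neighbour distances — which the abstract does not spell out:

```python
def dice(a: set, b: set) -> float:
    """Dice overlap of two pixel/voxel sets: 2|A∩B| / (|A| + |B|);
    1.0 means identical segmentations, 0.0 means no overlap."""
    return 2 * len(a & b) / (len(a) + len(b))

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between two 2D point sets (contours):
    the larger of the two directed mean nearest-neighbour distances, which
    is less outlier-sensitive than the classic maximum form."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def mean_nn(src, dst):
        return sum(min(d(p, q) for q in dst) for p in src) / len(src)
    return max(mean_nn(a, b), mean_nn(b, a))
```

In the workflow above, these would be evaluated between the delineated MR and pathology borders after registration; higher Dice and lower modified Hausdorff indicate better fusion.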
7 min
External validation of Alzheimer's disease machine-learning models: generalisability and clinical features
Helena Rico Pereira, Lisbon / Portugal
Author Block: H. R. Pereira1, V. Sá Diogo1, D. Prata2, H. Alexandre Ferreira1; 1Lisbon/PT, 2London/UK
Purpose: Recent studies have shown the potential of machine-learning models based on magnetic resonance imaging (MRI) features to aid the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, these models usually lack generalisability: most use data from the same public data sets, are trained on curated patient and healthy-subject data, and are not validated with independent "real-world" data.
Methods or Background: We aimed to validate in clinical practice our previously developed models (also derived from public data sets): model 1 (cognitively normal, CN, vs AD), with a balanced accuracy (BAC) of 90.6%, sensitivity of 91.5%, and specificity of 89.7%; and model 2 (CN vs MCI vs AD), with a BAC of 62.1% in multiclassification. Additionally, we explored the features of the misclassified cases. MRI T1-weighted MPRAGE morphometric data (computed with FreeSurfer 7.1.1) from Portuguese hospital patients were used, comprising 8 AD, 9 MCI and 21 CN (19 headache and 2 depression) patients.
Results or Findings: Model 1 showed a BAC of 97.6%, sensitivity of 100.0%, and specificity of 95.2%, misclassifying one CN subject as an AD patient. Model 2 showed a BAC of 65.8% (7 CN misclassified, 5 as MCI and 2 as AD; 5 MCI misclassified, 3 as CN and 2 as AD; 1 AD misclassified as MCI). Misclassified MCI patients showed volume changes in brain regions similar to those found in AD (amygdalar and temporal atrophy) or CN (hippocampal sparing), and the opposite was observed for the AD patient misclassified as MCI (entorhinal atrophy only).
Conclusion: The results suggest that our models may be of clinical use, provided that physicians frame the classification output within anamneses and clinical findings.
Limitations: The study's main limitation was the small data set tested.
Funding for this study: This work was financially supported by Fundação para a Ciência e Tecnologia (FCT) under the projects UIDB/00645/2020, SAICTPAC/0010/2015 and DSAIPA/DS/0065/. FCT has further supported HRP through the individual PhD grant 2021.08306.BD, and DP through 2022.00586.CEECIND.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: All data used in this study were approved by the ethics committee of each Portuguese hospital.
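Balanced accuracy in the binary case is the mean of sensitivity and specificity, which lets the reported figures be cross-checked from the quoted sensitivity/specificity pairs, e.g. (100.0 + 95.2)/2 = 97.6%:

```python
def balanced_accuracy(sensitivity: float, specificity: float) -> float:
    """Binary balanced accuracy: the mean of sensitivity and specificity,
    which corrects plain accuracy for the class imbalance typical of small
    clinical samples (here 21 CN vs 8 AD)."""
    return (sensitivity + specificity) / 2
```

For the three-class CN vs MCI vs AD setting, BAC generalises to the mean per-class recall.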
7 min
Towards safer imaging: a comparative study of deep learning-based denoising and iterative reconstruction in intraindividual low-dose CT scans using an in-vivo large animal model
Jonas Mück, Tübingen / Germany
Author Block: J. Mück, B. Stenzl, J. Hofmann, S. Afat, A. S. Brendlin; Tübingen/DE
Purpose: Computed tomography (CT) scans are a significant source of medically induced radiation exposure. Novel deep learning-based denoising (DLD) algorithms have been shown to enable diagnostic image quality at lower radiation doses than iterative reconstruction (IR) methods. However, most comparative studies employ low-dose simulations owing to ethical constraints. We used real intraindividual animal scans to investigate the dose-reduction capabilities of a DLD algorithm in comparison with IR.
Methods or Background: Fourteen sedated pigs underwent two 100%-dose CT scans on the same third-generation dual-source scanner, with a two-month interval between scans. On both occasions, we additionally reduced the mAs to 50%, 25%, 10%, and 5%. All scans were reconstructed using ADMIRE level 2 (IR2) and the DLD algorithm, resulting in a total of 280 data sets. Objective image quality measures (CT number stability, noise, and contrast-to-noise ratio) were assessed. Three radiologists independently evaluated subjective image quality, and interrater agreement was analysed using Spearman's correlation coefficient. Appropriately corrected mixed-effects models were used to analyse objective and subjective image quality.
Results or Findings: Neither dose reduction nor reconstruction method negatively affected CT number stability (p>0.999). In terms of objective image quality, the DLD algorithm at 25% radiation dose maintained noise and contrast-to-noise ratio comparable to 100% IR2. Interrater agreement for subjective image quality ratings was strong (r≥0.69, mean 0.93±0.05, 95% CI 0.92-0.94; each p<0.001). Subjective assessments indicated that DLD at 25% radiation dose was comparable to 100% IR2 in image quality, sharpness, and contrast (p≥0.281).
Conclusion: The DLD algorithm can achieve image quality comparable to the standard IR method at a dose reduction of up to 75%. This suggests a promising avenue for lowering patient radiation exposure without sacrificing diagnostic quality.
Limitations: This was a single-centre study with a specific hardware and software set-up.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This large animal study was approved by the Regional Council (C 01/19 G) and conducted following EU Directive No 2010/63/EU.

This session will not be streamed, nor will it be available on-demand!