Research Presentation Session: Artificial Intelligence & Machine Learning & Imaging Informatics

RPS 1005 - Harnessing AI for enhanced reporting and workflow

February 29, 14:00 - 15:30 CET

7 min
Does patient education level impact comprehension of radiology reports: can AI level the playing field?
Mohammed Bilal Aziz, Blackburn / United Kingdom
Author Block: M. B. Aziz1, R. Husam Al-Deen2, M. H. Chowdhury3, M. I. K. Inayat2, H. Ahmed2, B. Syed2, H. M. Khan2, A. Pervez2, S. Syed2; 1Blackburn/UK, 2London/UK, 3Basildon/UK
Purpose: This study investigates the impact of artificial intelligence (AI) on the comprehension of radiology reports by readers of different educational backgrounds. Radiology reports serve as a vital channel of communication between healthcare professionals and patients, yet their comprehensibility can vary with patients' educational attainment.
Methods or Background: 40 musculoskeletal MRI reports, 20 written by consultant radiologists and 20 by generative AI (ChatGPT), were evaluated by 10 participants from a range of educational backgrounds: secondary education, non-medical university graduates, medical students, and qualified doctors. AI-generated reports were produced from the radiologist-authored reports using a standardised prompt requesting lay comprehensibility without loss of detail. Comprehension was evaluated using a standardised Likert-scale metric, and the two report types were compared across educational strata.
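In analytic terms, the comparison described above reduces to averaging Likert scores by education level and report type. A minimal Python sketch of that stratified comparison (the file name and column layout are illustrative assumptions, not the authors' materials):

```python
# Mean Likert comprehension score per education stratum and report type.
import pandas as pd

# Assumed layout: one row per participant-report rating, with columns
# participant, education, report_type ("radiologist" | "ai"), score (1-5).
ratings = pd.read_csv("ratings.csv")

means = (
    ratings
    .groupby(["education", "report_type"])["score"]
    .mean()
    .unstack("report_type")
)
print(means)  # one row per stratum, one column per report type
```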
Results or Findings: For radiologist-authored reports, the combined understandability and readability average (out of 5; higher values indicate better comprehension) was 3.02 for secondary school participants, 3.08 for non-medical university graduates, 3.39 for medical students, and 3.62 for doctors: mean comprehension rose with the level of medically related education. For AI-generated reports, the mean comprehension scores were 3.95 for secondary school participants, 3.46 for university graduates, 3.71 for medical students, and 3.08 for doctors, showing no pattern across educational levels – a level playing field.
Conclusion: AI-generated reports demonstrated better comprehension among recipients across the educational spectrum, highlighting the potential of AI to make conversations about a patient's health more accessible and to allow patients to make informed medical decisions.
Limitations: The sample size was limited to 10 participants; further research is required into the applicability of AI in enhancing patient access to radiology reports.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: Not considered research as per MRC decision tool.
7 min
ChatGPT-4 makes cardiac MRI reports easy to understand: a feasibility study
Babak Salam, Bonn / Germany
Author Block: B. Salam, D. Kravchenko, L. Weinhold, S. Nowak, A. M. Sprinkart, U. I. Attenberger, D. Kütting, J. A. Luetkens, A. Isaak; Bonn/DE
Purpose: This study aimed to evaluate the ability of Chatbot Generative Pre-trained Transformer 4 (ChatGPT-4) to transform cardiac MRI reports into comprehensible text for medical laypersons.
Methods or Background: ChatGPT-4 was used to generate three simplified versions of each of 20 cardiac MRI reports using the same prompt (n=60). Two cardiovascular radiologists evaluated factual correctness, completeness of relevant findings, and serious misinformation with potential harm (total ratings, n=360), while medical laypersons evaluated the understandability of the original and simplified versions (total ratings, n=200 and n=600, respectively) on a Likert scale (1 “strongly disagree”, 5 “strongly agree”). The readability grade level of the reports was measured using the Automated Readability Index. The Mann-Whitney U test and the intraclass correlation coefficient (ICC) were used for statistical analysis.
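Two of the metrics named above are simple to reproduce. A minimal Python sketch of the Automated Readability Index (standard published formula) and a Mann-Whitney U comparison; the ratings shown are made-up placeholders, not study data:

```python
import re
from scipy.stats import mannwhitneyu

def automated_readability_index(text: str) -> float:
    # ARI = 4.71*(characters/words) + 0.5*(words/sentences) - 21.43
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sentences - 21.43

print(automated_readability_index("The heart is enlarged. No oedema is seen."))

# Placeholder layperson understandability ratings (Likert 1-5)
original_scores = [1, 1, 2, 1, 1, 2, 1, 1]
simplified_scores = [4, 5, 4, 4, 5, 4, 5, 4]
stat, p = mannwhitneyu(original_scores, simplified_scores)
print(f"U={stat}, p={p:.4f}")
```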
Results or Findings: ChatGPT-4 reports were generated in 52 sec on average (range, 8–78 sec). The median reading grade level of the ChatGPT-4 versions was significantly lower (original 10 [9-12] vs ChatGPT-4 5 [4-6]; p<.001), and laypersons rated them easier to understand than the original reports (original 1 [1-1] vs ChatGPT-4 4 [4-5]; p<.001). Radiologists’ ratings of the ChatGPT-4 versions reached a median of 5 (5-5) in all three categories, with “strong agreement” for factual correctness in 92% and for completeness of relevant findings in 84% of the reports. Test-retest agreement for layperson understandability between the three simplified reports generated from the same original report was moderate (ICC: 0.54; p<.001). Interrater agreement between radiologists was high (ICC: 0.92; p<.001).
Conclusion: ChatGPT-4 can transform complex cardiac MRI reports into more understandable, layperson-friendly language without compromising factual correctness or completeness. This can help convey patient-relevant radiology information in an easy-to-understand manner.
Limitations: Exploratory study design. Relatively small sample size of medical laypersons. During the questionnaire completion process, participants may have experienced a learning effect as they read through the simplified reports, potentially influencing their subsequent assessment of understandability.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: Because of the use of fictitious, unidentifiable data, approval of an institutional review board was not required.
7 min
Cost-consequence analysis of artificial intelligence-assisted image reading in lung cancer screening
Harriet Louise Lancaster, Groningen / Netherlands
Author Block: H. L. Lancaster1, K. Togka1, X. Pan1, M. Silva2, D. Han1, M. Oudkerk1; 1Groningen/NL, 2Parma/IT
Purpose: This modelling study aimed to estimate the clinical and cost consequences of a hypothetical AI-assisted image-reading solution for lung cancer screening (LCS) in the Netherlands, compared with image reading without AI. LCS with low-dose CT (LDCT) detects lung cancer earlier and reduces lung cancer mortality by 20-24% (as shown in the NLST and NELSON RCTs). However, implementing LCS may exacerbate radiologists' workload. Artificial intelligence (AI) shows promising results in the detection, segmentation, and classification of lung nodules for LCS. Despite these encouraging findings, AI-assisted image reading is rarely used in clinical practice.
Methods or Background: A cost-consequence analysis was conducted from a healthcare perspective, capturing the costs and effects of different LCS scenarios at baseline. Essential model inputs included: eligible population, screening population, image reading time by radiologists, average weighted time, image reading time by AI, costs, screening effectiveness without AI, and discrepancies in image reading. Control scenario: LCS without AI-assisted image reading; two radiologists independently read all CT scans. Scenario A: LCS with AI as a parallel reader; AI read all CT scans in parallel with a radiologist, and discrepant results were assessed by a consensus radiologist. Scenario B: LCS with AI as a first reader; AI read all CT scans first, then a radiologist confirmed positive scans and identified false-positive classifications.
Results or Findings: LCS with AI-assisted image reading has the potential to reduce image reading costs by 37% and 73% in Scenarios A and B, respectively (total reading costs [Control: €29,676,879; Scenario A: €18,704,843; Scenario B: €8,146,251]). Additionally, utilising AI as the first reader may reduce radiologists’ workload.
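The quoted percentage reductions follow directly from the stated totals, as a quick check shows:

```python
# Reading-cost reductions relative to the control scenario
control, scenario_a, scenario_b = 29_676_879, 18_704_843, 8_146_251
print(f"Scenario A: {(control - scenario_a) / control:.0%} reduction")  # 37%
print(f"Scenario B: {(control - scenario_b) / control:.0%} reduction")  # 73%
```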
Conclusion: The incorporation of AI-assisted image reading into LCS yields substantial reductions in costs associated with image reading. Our findings support AI utilisation in LCS to alleviate constraints on healthcare resources.
Limitations: No limitations were identified.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: Performed using published data and expert opinions.
7 min
Deep learning assisted curation of the CANDID-III dataset with free-text reports
Sijing Feng, Melbourne / Australia
Author Block: S. Feng1, Q. Liu2, D. Ritchie3, B. K. J. Wilson3; 1Mosgiel/NZ, 2Auckland/NZ, 3Dunedin/NZ
Purpose: This study aimed to curate the CANDID-III dataset, which consists of adult chest radiographs with comprehensive labels derived from both manual and AI-assisted annotation.
Methods or Background: The CANDID-II dataset is an in-development chest radiograph dataset containing 33,486 anonymised free-text radiological reports. CANDID-III inherited the same 45 radiological labels from CANDID-II, mapped to the UMLS ontology for standardisation; these form the manually labelled portion of the CANDID-III dataset. An ensemble transformer-based label-extraction model was trained and validated on the CANDID-II dataset using an 80:20 split. The model was then used to automatically label the remainder of the CANDID-III dataset. An evaluation set of 552 reports was assessed by selected annotation-team members. Label-specific ‘mention’ F1 scores were calculated for the final ensemble model, with ‘not mentioned’ as the negative class and ‘indeterminate’, ‘absent’, and ‘present’ combined as the positive class.
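A minimal sketch of the ‘mention’ F1 computation described above, collapsing the four-way mention status to binary (the labels shown are placeholders, not CANDID data):

```python
from sklearn.metrics import f1_score

def collapse(label: str) -> int:
    # 'not mentioned' is the negative class; 'indeterminate', 'absent'
    # and 'present' are pooled as the positive class.
    return 0 if label == "not mentioned" else 1

y_true = [collapse(l) for l in ["present", "absent", "not mentioned", "indeterminate"]]
y_pred = [collapse(l) for l in ["present", "not mentioned", "not mentioned", "present"]]
print(f1_score(y_true, y_pred))
# Per-label scores are then aggregated into macro- and micro-averaged F1
# across all 45 findings.
```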
Results or Findings: The completed CANDID-III dataset contains 322,473 images and 220,977 anonymised free-text radiological reports from 94,210 unique patients (1:1.04 M:F ratio). AI-assisted annotation was performed on 88% of the dataset; for this portion, the labelling model achieved a macro-F1 score of 0.88 and a micro-F1 score of 0.94 across all findings. Seven labels are shared with CheXpert, with F1 scores ranging from 0.93 to 1.0. F1 scores for 30 CANDID-III labels are above 0.90, while 8 labels range between 0.80 and 0.90.
Conclusion: The CANDID-III dataset adds numerous new, clinically significant radiological annotations labelled to a high accuracy. It contributes to the repertoire of publicly available chest radiograph datasets for AI development. Instructions for accessing the dataset are available at DOI: 10.17608/k6.auckland.22726004.
Limitations: Single-institution dataset; radiologists' opinion was used as the label ground truth rather than objective quantitative measures.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: The study was approved by the University of Otago Human Ethics Committee.
7 min
Sixteen thousand and counting: performance of an artificial intelligence tool for identifying common pathologies on chest radiographs and report prioritisation
Carolyn Horst, London / United Kingdom
Author Block: C. Horst1, N. Ruechuseth1, C. Allwin1, V. Naidu1, Y. Zhu2, R. O'Shea1, C. Goncalves1, M. Narbone1, V. Goh1; 1London/UK, 2Warwick/UK
Purpose: Chest radiograph (CXR) artificial intelligence (AI) tools may streamline reporting times and improve patient outcomes through decision-support functionalities; however, clinical uptake has been limited. The objective of this study was to better understand their accuracy at different probability thresholds for different use cases.
Methods or Background: 16,996 CXRs were retrospectively scored (probability 0-100) by an AI tool for 8 common pathologies. Corresponding historical free-text reports were processed by a natural language processing (NLP) model to establish the ground truth. Sensitivities and specificities for the eight findings were calculated at four positive AI score thresholds (5, 15, 30 and 45). A composite label of 'normal' was created, assigned where none of the individual labels exceeded the given probability threshold, and its sensitivity and specificity were calculated.
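A minimal Python sketch of this threshold analysis, including the composite 'normal' label (the score and ground-truth arrays are dummy placeholders, not study data):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

scores = np.random.rand(1000, 8) * 100      # AI scores for 8 findings (dummy)
truth = np.random.randint(0, 2, (1000, 8))  # NLP-derived ground truth (dummy)

for thr in (5, 15, 30, 45):
    pred = (scores >= thr).astype(int)
    # Composite 'normal': positive only when no finding reaches the threshold
    normal_pred = (pred.sum(axis=1) == 0).astype(int)
    normal_true = (truth.sum(axis=1) == 0).astype(int)
    print(thr, sens_spec(normal_true, normal_pred))
```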
Results or Findings: Per-finding sensitivities ranged from 0.46-0.94 and specificities from 0.53-0.99, depending on pathology and positive threshold. For ‘normal’ CXRs, sensitivities ranged from 0.51-0.82, and specificities from 0.82-0.95.
Conclusion: Our analysis demonstrates the importance of choosing acceptable thresholds for ‘positive’ findings for different pathologies. A very high sensitivity may be appropriate for emergency findings such as pneumothorax, at the cost of specificity. Conversely, a high specificity is preferable for triaging low-risk studies for reporting without missing actionable pathology. The approved threshold of 15 provides a balance of the two. Our study demonstrates a flexible approach to using AI for CXR analysis of common abnormalities, and the possibility of using the tool to identify 'normal' radiographs for triage purposes.
Limitations: There may be inaccuracies in the NLP outputs that have not been controlled for, and the AI definitions of certain pathologies may not align with radiologists’ local reporting practices. This effect was partially mitigated by the large number of analysed cases.
Funding for this study: No funding was provided for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: Retrospective study.
7 min
Glioblastoma patient monitoring using a large language model: accurate and effective summarisation of radiological reports with GPT-4
Robert Angelo Terzis, Cologne / Germany
Author Block: R. A. Terzis, K. R. Laukamp, J-M. Werner, N. Galldiks, S. Lennartz, D. Maintz, M. Schlamann, M. Schoenfeld, J. Kottlors; Cologne/DE
Purpose: Monitoring of glioblastoma patients involves multiple MRI scans, making the process complex and resource-heavy. The advent of large language models (LLMs) presents an opportunity to support physicians by summarising radiological results and disease-tracking data. The purpose of this study was to evaluate this possibility, focusing in particular on the capacity of LLMs to extract meaningful information from complex textual input.
Methods or Background: We retrospectively included 225 examinations from 45 patients with biopsy-confirmed glioblastoma treated at our institution. The large language model GPT-4 was supplied with the five most recent MRI reports, including clinical information, in text form. The model's task was to synthesise the disease course, present the current state, and produce R code for a suitable graphic representation. Summaries generated by GPT-4 were evaluated by two expert neuro-oncologists with >20 and >8 years of experience, respectively. The evaluation categories were: (1) accuracy and logical-semantic representation, determined by assessing four distinct items on a binomial ("yes"/"no") scale; (2) overall quality; and (3) utility for patient monitoring and therapeutic decision-making, assessed on a 5-point Likert scale, with higher scores indicating more favourable results.
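The summarisation step can be reproduced with an API call of roughly the following shape; the prompt wording and model identifier are assumptions, as the abstract does not publish them:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder stand-ins for the five most recent MRI reports
reports = [f"MRI report {i} ..." for i in range(1, 6)]

prompt = (
    "Summarise the disease course across the following five MRI reports, "
    "state the current disease status, and output R code that plots the "
    "course over time:\n\n" + "\n\n".join(reports)
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```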
Results or Findings: The summaries from GPT-4 matched the expert consensus on the disease progression 86.7% of the time. GPT-4's disease course summaries received a median score of 4 in terms of quality and were perceived to have a median utility score of 3.
Conclusion: GPT-4 effectively outlined the disease progression with significant precision, value, and relevance for clinicians. Our results underline the potential of large language models for radiological and medical workflow optimisation.
Limitations: Limitations are reliance on text-only data, the GPT-4 model's knowledge cutoff in 2021, the "black-box" problem and a single-center linguistic focus.
Funding for this study: This study was funded in part by the German Federal Ministry of Education and Research Network of University Medicine 2.0 (Grant no. 01KX2121).
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study received ethical approval, and informed consent was waived due to the retrospective design. No patient-identifying information was provided to the artificial intelligence.
7 min
Initial experience of a fully digital workflow for radiological evaluation in clinical trials
Martin Scott, Uppsala / Sweden
Author Block: M. Scott1, J. Burwick Nyberg1, T. Sundin1, M. Gelotte1, P. Eckerbom1, J. Wikström1, P. Liss1, T. Bjerner2; 1Uppsala/SE, 2Linköping/SE
Purpose: The aim of this project was to use a research PACS and set up a fully digital workflow for radiologic evaluation in clinical trials. Radiological evaluation of tumour response during oncologic treatment is an important task for many radiology departments. Reporting of such evaluation has previously been and is still often documented on paper and not digitally in the PACS (Picture Archiving and Communication System).
Methods or Background: This project was carried out at the Department of Radiology, Uppsala University Hospital. Studies to be evaluated radiologically in a research trial were pseudonymised using RSNA CTP and exported from the clinical Philips Vue PACS to an external server, the research PACS, consisting of another Philips Vue PACS with an adapted configuration. CTP then notified the reading radiologist of each new study by email. A structured report template was set up in the research PACS in which findings were marked and hyperlinks included. The report can then be read, and the hyperlinks used, by the oncologist in the Philips VueMotion web interface of the research PACS. The report also includes graphs visualising lesion development over time.
Results or Findings: Since the start of the digital workflow in September 2023, it has been rapidly adopted by the research nurses and radiologists involved, and the workflow is now considered more efficient and consistent. For the oncologist involved in the study, it is a great advantage to be able to easily see the measurements and graphs in the web viewer. Time is saved through reduced paper handling.
Conclusion: A digital workflow significantly improves the handling of oncology studies that include radiological evaluation of tumour response to treatment.
Limitations: No limitations were identified.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: This is not a clinical study.
7 min
Reshaping CT imaging workflow through intelligent AI orchestration using RAPID: radiologic automated processing for image distribution
René Hosch, Essen / Germany
Author Block: R. Hosch, V. Parmar, J. Kohnke, K. A. Borys, K. Arzideh, G. Baldini, J. Haubold, L. Umutlu, F. Nensa; Essen/DE
Purpose: The purpose of this study was to introduce RAPID, an algorithm for swiftly and automatically orchestrating images based on detected anatomical landmarks and body regions. In the rapidly evolving medical AI field, radiologists are incorporating AI models into clinical practice, aiming for enhanced efficiency and workflow optimisation. This necessitates an "orchestrator" capable of automatically directing images to appropriate AI models without manual intervention. Existing CT solutions predominantly rely on DICOM tags, which offer limited and often unreliable information such as SeriesDescription.
Methods or Background: 13,211 abdominal and 6,789 whole-body CT scans from 20,000 patients (42.75% female) were used. Topograms from these scans were employed for three tasks: classification (torso, head-neck, hands, legs), region detection (head, brain, pericardium, thorax, abdomen), and organ detection (lung, heart, spine, liver, kidneys, spleen, stomach, colon, pancreas, brain, hip). Series-specific organ and body-region segmentations, generated using the Body and Organ Analysis (BOA) algorithm, were mapped onto the topograms using DICOM geometry to serve as the ground truth. YOLOv8 models were trained for classification and object detection and evaluated using the F1-score and mean average precision (mAP0.5).
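A minimal sketch of this training setup with the ultralytics YOLOv8 API; dataset paths and hyperparameters are placeholders, not the study's configuration:

```python
from ultralytics import YOLO

# Topogram classification (torso, head-neck, hands, legs)
cls_model = YOLO("yolov8n-cls.pt")
cls_model.train(data="topogram_classification/", epochs=50, imgsz=640)

# Body-region and organ detection on topograms
det_model = YOLO("yolov8n.pt")
det_model.train(data="topogram_regions.yaml", epochs=50, imgsz=640)
metrics = det_model.val()  # reports mAP@0.5 among other metrics
```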
Results or Findings: Classification achieved a weighted F1-score of 0.92. Region detection reached 0.96 mAP, while organ detection scored 0.94 mAP. Building on this robust topogram-based classification and detection, orchestration rules were established to automatically route each series to a suitable AI model whenever it met the model’s prerequisites.
Conclusion: RAPID accurately and efficiently locates body regions and organs in CT scans using topograms. These landmarks facilitate series orchestration for AI applications. RAPID employs "deep content inspection" for precise routing decisions, prioritising image data over manually entered metadata.
Limitations: The trained models should be evaluated on external datasets. In addition, the number of relevant landmarks for object detection should be extended.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study adhered to all guidelines defined by the approving institutional review board of the investigating hospital. The Institutional Review Board waived written informed consent due to the study's retrospective nature. Complete anonymisation of all data was performed before inclusion in the study.
7 min
A novel radiology communication tool to reduce workflow interruptions: clinical evaluation of RadConnect
Sandra Vosbergen, Eindhoven / Netherlands
Author Block: M. Sevenster1, K. Hergaarden2, O. Hertgers2, N. Kruithof2, J. Roelofs2, S. Romeijn2, D. D. Nguyen1, S. Vosbergen1, H. J. Lamb2; 1Amsterdam/NL, 2Leiden/NL
Purpose: The objective of this study was to test the hypothesis that a novel asynchronous communication tool (RadConnect) reduces radiologist workflow interruptions. Effective stakeholder communication across the imaging value chain is a crucial responsibility of radiologists. However, the communication tools typically used were not created for the unique needs of imaging, which contributes to frequent radiologist interruptions.
Methods or Background: We conducted a difference-in-difference before-after study. Before adoption of RadConnect, technologists used three conventional communication methods to consult radiologists (in-person, telephone, general-purpose enterprise chat [GPEC]). After adoption, participants used RadConnect as a fourth. Technologists manually recorded every radiologist consult request related to neuro and thorax CT scans in the 40 days before and 40 days after adoption of RadConnect. Telephone traffic volume to section beepers was obtained from the hospital telephone system; the abdomen beeper was included as the control group. Experiences of value and usability were collected through an electronic survey and structured interviews.
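A difference-in-difference estimate of this kind is commonly fit as a regression with an interaction term; a minimal sketch (the abstract does not specify the exact estimator, and the variable names and data layout are illustrative assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per beeper-day, with a daily consult/telephone
# count, treated (neuro/thorax beeper = 1, abdomen control = 0) and
# post (after RadConnect adoption = 1).
df = pd.read_csv("daily_counts.csv")

model = smf.ols("count ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])  # the difference-in-difference effect
```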
Results or Findings: Adoption of RadConnect resulted in a 53% reduction in synchronous (in-person, telephone) consult requests: from 6.1±4.2 per day to 2.9±2.9 (P < 0.001). There was a 77% decrease (P < 0.001) in telephone volume to the neuro and thorax beepers, while no significant volume change was noted for the abdomen beeper (control). The positive impact of RadConnect on workflow interruptions was observed not only in the statistical analysis but was also confirmed by the survey (46% response rate) and the interviews.
Conclusion: RadConnect significantly reduced workflow interruptions. It differs from a general chat application through role-based interaction and a prioritised worklist overview, which study participants valued. Future iterations of RadConnect can potentially contribute to a more focused work environment.
Limitations: This was a single-centre study with use limited to CT scans for select sections.
Funding for this study: Funding was received from Philips.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: The Internal Committee for Biomedical Experiments of Royal Philips (Amsterdam, The Netherlands) approved this study (ICBE-S-000556) and it was registered in clinicaltrials.gov (NCT05540444). Protocol review was waived by the Medical Ethics Committee of the Leiden University Medical Center (N22.056).
7 min
Enhancing radiology workflows through efficient x-ray image-based orchestration and classification
Judith Kohnke, Essen / Germany
Author Block: J. Kohnke, R. Hosch, J. Haubold, V. Parmar, K. A. Borys, K. Arzideh, L. Umutlu, F. Nensa; Essen/DE
Purpose: The purpose of this study was to present an algorithm that classifies up to 12 different X-ray procedures in milliseconds. In radiology, the X-ray modality consistently stands out as the predominant diagnostic procedure in terms of utilisation. With the exponential growth in radiological examinations and the concurrent proliferation of artificial intelligence (AI) integrations, there is an emergent demand for an image-based routing system. Such a system should be adept at automated image orchestration while also being proficient in precise image classification, ensuring the optimisation of data quality and supporting automated AI workflows.
Methods or Background: An internal dataset of 15,502 X-rays encompassing various anatomical regions was collected for this study, containing the following classes: knee (n=1,676), pelvis (n=692), foot (n=1,686), ankle (n=1,664), wrist (n=1,620), thigh (n=1,183), hips (n=1,496), thorax (n=893), shoulder (n=1,633), thorax lying (n=639), lumbar spine (n=813), and hands (n=1,507). Each class was split 80/20 for the initial training process. The YOLOv8 algorithm was employed for image classification, and F1-score, sensitivity, and specificity were used for evaluation.
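Per-class sensitivity and specificity for a multi-class problem of this kind can be derived one-vs-rest from the confusion matrix; a minimal sketch with placeholder labels:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["knee", "pelvis", "foot", "ankle", "wrist", "thigh", "hips",
           "thorax", "shoulder", "thorax lying", "lumbar spine", "hands"]

y_true = np.random.randint(0, 12, 500)  # placeholder ground-truth labels
y_pred = np.random.randint(0, 12, 500)  # placeholder model predictions
cm = confusion_matrix(y_true, y_pred, labels=list(range(12)))

for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity={tp / (tp + fn):.3f}, "
          f"specificity={tn / (tn + fp):.3f}")
```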
Results or Findings: The algorithm demonstrated strong performance with an overall sensitivity of 0.997 and an overall specificity of 0.970. In addition, the model reached an overall F1-score of 0.970, highlighting a robust classifier performance.
Conclusion: The presented algorithm shows an accurate and reliable classification performance which could benefit X-ray orchestration and data quality in radiology.
Limitations: The trained algorithm should be expanded using more detailed classes for specific X-ray types and evaluated on external data.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study adhered to all guidelines defined by the approving institutional review board of the investigating hospital. The Institutional Review Board waived written informed consent due to the retrospective nature of the study. Complete anonymisation of all data was performed before inclusion in the study.
7 min
Diagnostic accuracy and time efficiency of a novel deep learning algorithm for the assessment of intracranial haemorrhage: first results
Christian Booz, Frankfurt a. Main / Germany
Author Block: C. Booz1, G. M. Bucolo2, V. Koch1, L. D. Gruenewald1, L. S. Alizadeh1, A. Gökduman1, T. D'Angelo3, T. J. Vogl1, I. Yel1; 1Frankfurt a. Main/DE, 2Barcellona Pozzo di Gotto/IT, 3Messina/IT
Purpose: The objective of the study was to evaluate the diagnostic accuracy and time efficiency of a deep learning-based pipeline using a Dense U-net architecture for the assessment of intracranial haemorrhage (ICH) in unenhanced head CT scans.
Methods or Background: This retrospective study included 502 CT scans of 502 patients (mean age, 70 ± 13 years; 248 men and 254 women) who had undergone unenhanced head CT for the assessment of ICH. All CT scans were analysed independently by the algorithm and by a board-certified radiologist for the presence of ICH. Where ICH was present, it was classified as intraparenchymal haemorrhage (IPH), intraventricular haemorrhage (IVH), subarachnoid haemorrhage (SAH), subdural haemorrhage (SDH) or epidural haemorrhage (EDH). Additionally, the time to the first preliminary diagnosis of ICH was measured. Three board-certified radiologists analysed the CT scans in consensus reading sessions to establish the standard of reference for haemorrhage presence and classification.
Results or Findings: The reference standard revealed a total of 554 ICH findings (IPH, n=172; IVH, n=26; SAH, n=163; SDH, n=178; EDH, n=15). The algorithm showed high diagnostic accuracy for the assessment of ICH, with a sensitivity of 92%, a specificity of 95% and an accuracy of 93%. For the three most frequent ICH types in this study, sensitivity was 92%, 93% and 93% and specificity 95%, 96% and 95% (IPH, SAH and SDH, respectively). Regarding analysis time, the algorithm was significantly faster than the preliminary report of the assigned radiologist (15 ± 2 s vs 277 ± 14 s, p < 0.001).
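The abstract does not name the test behind the reading-time comparison; one plausible choice on per-case times is Welch's t-test, sketched here with simulated samples matching the reported summary statistics:

```python
import numpy as np
from scipy.stats import ttest_ind

algo_times = np.random.normal(15, 2, 502)           # simulated, 15 ± 2 s
radiologist_times = np.random.normal(277, 14, 502)  # simulated, 277 ± 14 s
t, p = ttest_ind(algo_times, radiologist_times, equal_var=False)
print(t, p)
```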
Conclusion: A novel deep learning algorithm provides high diagnostic accuracy combined with time efficiency for the identification and classification of ICH in unenhanced CT scans.
Limitations: Single-centre retrospective study.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the local IRB.

This session will not be streamed, nor will it be available on-demand!