Research Presentation Session: Artificial Intelligence & Machine Learning & Imaging Informatics

RPS 1305 - Strategic deployment of AI

March 1, 09:30 - 11:00 CET

7 min
Early platform release of the federated European cancer imaging infrastructure
Ignacio Blanquer, Valencia / Spain
Author Block: A. S. Alic1, D. Arce Grilo1, M. Birhanu2, E. Bron2, V. Kalokyri3, T. Kussel4, K. Lang5, K. Majcen5, I. Blanquer1; 1Valencia/ES, 2Rotterdam/NL, 3Heraklion/GR, 4Heidelberg/DE, 5Graz/AT
Purpose: EUCAIM (https://cancerimage.eu/) is a pan-European federated infrastructure for cancer images, fueling AI innovations.
Methods or Background: This federated infrastructure is built upon a set of core services that comprise a public metadata catalogue, a federated search service following a common hyperontology, an access negotiation system, a coherent AAI and a distributed processing service. EUCAIM has recently released an early prototype with 40 registered image datasets from nine cancer types (breast, colon, lung, prostate, rectum, liver, diffuse intrinsic pontine glioma, neuroblastoma, glioblastoma), related to the five projects in the AI4HI network (EUCANIMAGE, ProCAncer-I, INCISIVE, CHAIMELEON and PRIMAGE - https://future-ai.eu/), for a total of more than 200,000 image series from approximately 20,000 individuals. These collections follow a common metadata model defined in the EUCAIM project.
Results or Findings: This early prototype comprises a dashboard with guiding instructions, a public catalogue, a federated search engine and an access negotiation system in beta version.
Conclusion: This platform will permit users to discover, search, request, access and process medical imaging and associated clinical data in a flexible manner, supporting federated providers with different access levels and a future centralised repository. EUCAIM is based on cloud and container technologies, and it will be linked to intensive computing infrastructures such as EGI and supercomputing centres.
Limitations: The access negotiation service is currently in beta version and access requests will be forwarded to the providers.
Funding for this study: This project is co-funded by the European Union under grant agreement 101100633
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: The project acts as a broker for accessing data and relies on the ethical approvals of the providers and requesters.
7 min
Radiology AI deployment and assessment rubric (RADAR) for value-based AI in radiology
Jacob Johannes Visser, Rotterdam / Netherlands
Author Block: B-J. Boverhof1, K. Redekop1, D. Bos1, M. P. A. Starmans1, J. Birch2, A. G. Rockall3, J. J. Visser1; 1Rotterdam/NL, 2Poole/UK, 3Godalming/UK
Purpose: The aim is to provide a comprehensive framework for value assessment of AI for radiology.
Methods or Background: This paper presents the RADAR framework, which has been adapted from Fryback and Thornbury's imaging efficacy framework and facilitates valuation of radiology artificial intelligence (AI) from conception to local implementation. Special attention is placed on local efficacy to underscore the importance of appraising an AI system in its local environment. The RADAR framework is illustrated through a range of study designs suited to valuation at each level.
Results or Findings: The RADAR approach constitutes a seven-level hierarchy, providing radiologists, researchers, and decision-makers with a conceptual framework for comprehensive AI valuation in radiology. RADAR is dynamic, catering to varying valuation needs throughout the AI's developmental cycle. Technical and diagnostic efficacy (RADAR-1 and RADAR-2) are assessed before clinical implementation and can be addressed by in-silico clinical trials and cross-sectional studies. The next phases, encompassing diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), necessitate clinical integration and can be addressed through randomised controlled trials and cohort studies. Societal efficacy (RADAR-6) delves into broader societal implications, assessed through health-economic evaluations. Concluding the hierarchy, the extent to which previous assessments generalise locally (RADAR-7) is gauged with budget impact analysis and multi-criteria decision analysis.
Conclusion: The RADAR framework stands as a comprehensive solution for valuing radiology AI. With its progressive and hierarchical approach, as well as an emphasis on local efficacy, RADAR illustrates radiology AI's value in conformity with the notion of value-based radiology.
Limitations: No limitations were identified.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: No information provided by the submitter.
7 min
Knowledge of AI governance, perceived challenges, opportunities, and suggestions for AI implementation by UK radiographers
Nikolaos Stogiannos, Corfu / Greece
Author Block: N. Stogiannos1, T. J. T. O'Regan1, M. Pogose1, H. Harvey1, A. Kumar1, R. Malik2, A. Barnes1, M. F. F. McEntee3, C. Malamateniou1; 1London/UK, 2Farnworth/UK, 3Cork/IE
Purpose: Radiographers are key stakeholders in AI use for clinical imaging and radiation therapy. AI implementation is key to harnessing the potential benefits of AI innovation. Knowledge of AI governance by all healthcare professionals is vital for AI implementation in clinical practice. This study aims to explore UK radiographers' knowledge and perceptions of AI governance.
Methods or Background: An online survey on Qualtrics was distributed to UK-practising radiographers via social media. Eligible respondents needed to have theoretical knowledge and/or practical expertise in the use of AI in medical imaging and/or radiation therapy. Descriptive and inferential statistics were used to analyse quantitative data, and content analysis was used for the open-ended questions.
Results or Findings: There were 88 valid responses. Lack of training, guidance, and funding were the most important challenges to AI implementation, as perceived by radiographers. Many radiographers (36.9%) were unaware of evaluation methods for AI tools, whilst 56.6% had not received any AI-specific training. Robust governance frameworks (30.7%), customised training (27.3%), and patient and public involvement (21.6%) were noted as strategic priorities by respondents.
Conclusion: Effective leadership, allocated time, and tailored training will contribute to successful AI implementation. Further research is needed to ensure radiographers can harness the benefits and minimise risks of AI.
Limitations: Selection bias might have occurred in this study, since data was collected online. Also, the skewed geographical distribution of the respondents may further limit the generalisability of the results.
Funding for this study: This study received funding from the College of Radiographers CORIPS grant scheme (grant number: 209) and the City Radiography Research Fund.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the City, University of London School of Health and Psychological Sciences Research Ethics Committee (reference: ETH2122-1015).
7 min
Black box no more: a survey to explore AI adoption and governance in medical imaging and radiation therapy in the UK
Nikolaos Stogiannos, Corfu / Greece
Author Block: N. Stogiannos1, T. J. T. O'Regan1, A. Barnes1, A. Kumar1, R. Malik2, M. Pogose1, H. Harvey3, M. F. F. McEntee4, C. Malamateniou1; 1London/UK, 2Farnworth/UK, 3Banstead/UK, 4Cork/IE
Purpose: The clinical use of AI tools in medical imaging and radiation therapy (MIRT) has highlighted challenges to AI adoption and governance for healthcare professionals. This study aims to map the perceived challenges around clinical adoption of AI. Opportunities associated with AI and suggestions for future implementation are explored.
Methods or Background: A multidisciplinary online survey on Qualtrics® was designed using expert focus groups and published literature and piloted (n=9) before distribution. It was shared via social media and professional networks with all MIRT professionals in the UK. Data were analysed using descriptive and inferential statistics in SPSS, whilst content analysis was employed for the open-ended questions.
Results or Findings: A total of 245 valid responses were received from different MIRT professionals. Lack of knowledge of AI governance frameworks was noted (42.1%). Prior AI training was significantly correlated with understanding of AI governance concepts (p=0.007 for MHRA and p=0.001 for ISO standards). Respondents indicated that clear governance frameworks (11.4%), AI training (9%) and effective leadership (8.5%) are vital for successful AI adoption.
Conclusion: Knowledge of, and confidence in, AI technologies correlate with prior AI-related training. Different professionals were familiar with the frameworks related to their own practice. Tailored AI training is needed to address knowledge gaps for safe and successful AI adoption in medical imaging and radiation therapy in the UK.
Limitations: The small sample size of this study means results cannot be generalised to the broader UK medical imaging and radiation therapy AI ecosystem.
Funding for this study: This study received funding from the College of Radiographers CORIPS grant scheme (grant number: 209) and the City Radiography Research Fund.
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the City, University of London School of Health and Psychological Sciences Research Ethics Committee (reference: ETH2122-1015).
7 min
Radiographer education and learning in artificial intelligence (REAL-AI)
Geraldine Doherty, Belfast / United Kingdom
Author Block: G. Doherty1, L. McLaughlin1, R. Bond1, J. McConnell2, C. Hughes1, S. L. McFadden1; 1Belfast/UK, 2Leeds/UK
Purpose: Artificial intelligence (AI) is widespread in medical imaging, yet there is a paucity of information on education and training available for staff. Further research is required to identify what training is available, and what preparations are required to bring AI knowledge to levels that will enable radiographers to work competently alongside AI. This study aimed to: a) investigate current provision of AI education at UK higher education institutes (HEIs); b) explore the attitudes and opinions of educators.
Methods or Background: Data were collected through two online surveys: 1) UK HEIs; 2) medical imaging educators. The surveys were distributed in the UK by the heads of radiography education (HRE), The Society of Radiographers and as part of the Research Hub at ECR 2023. The study was promoted on LinkedIn and Twitter (X), and through university channels.
Results or Findings: Responses were received from 22 HEIs in the UK and 33 educators from across Europe. Data analysis is ongoing, but preliminary findings show that 68.2% (n=15) of responding HEIs claim to have introduced AI into the curriculum already. 84.8% (n=28) of educators claim they themselves have received no training on AI despite having to embed it into the curriculum. The main reason for this, as cited by HEIs, is limited resources. 69.7% (n=23) of educators believe that AI concepts should be taught by an AI expert.
Conclusion: By surveying educators and HEIs separately, this study captured two different perspectives regarding the provision of AI education. This unique insight highlighted disharmony between HEIs and educators. Preliminary insights highlight that educators feel unprepared to deliver AI content, and HEIs are under pressure to add AI concepts to an already full curriculum.
Limitations: An identified limitation was that surveys, focus groups and interviews were conducted in the English language only.
Funding for this study: This project has been part-funded by a College of Radiographers Industry Partnership Scheme, grant number 229 (AI).
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the Ulster University Filter Committee. Reference numbers: FCNUR-23-051 / FCNUR-23-006-A.
7 min
International medical students' perceptions towards artificial intelligence in medicine: a multicentre, cross-sectional survey among 192 universities
Felix Busch, Berlin / Germany
Author Block: F. Busch1, L. Hoffmann1, D. Truhn2, M. Makowski3, K. K. Bressem1, L. C. Adams3; 1Berlin/DE, 2Aachen/DE, 3Munich/DE
Purpose: Artificial intelligence (AI) is set to fundamentally change the educational and professional landscape for the next generation of physicians worldwide. This study aimed to explore the current international attitude of medical students towards AI in the medical curriculum and profession on a large, global scale and identify factors that shape their attitudes.
Methods or Background: This multicentre, multinational cross-sectional study developed and validated an anonymous online survey of 15 multiple-choice items to assess medical, dentistry, and veterinary students' preferences for AI events in the medical curriculum, the current state of AI education, and students' AI knowledge and attitudes towards using AI in the medical profession. Subgroup analyses were performed considering gender, age, study year, tech-savviness, prior AI knowledge and AI events in the curriculum, and university location.
Results or Findings: Between April and October 2023, a total of 4,313 medical, 205 dentistry, and 78 veterinary students from 192 faculties and 48 countries responded to the survey. Most participants came from European countries (n=2,350), followed by North/South America (n=1,070) and Asia (n=944). Students showed predominantly positive attitudes towards AI in medicine (67.6%, n=3,091) and expressed a strong desire for more AI education (76.1%, n=3,474). However, they reported limited general knowledge of AI (75.3%, n=3,451) and felt inadequately prepared to use AI in their future careers (57.9%, n=2,652). Subgroup analyses revealed differences in attitudes between students from the Global South and North and on the continental level, among others.
Conclusion: This large-scale international study underlines the generally positive attitude of medical students towards the application of medical AI and explores variables that influence such attitudes. Our study highlights the necessity for a greater emphasis on AI education within medical curricula.
Limitations: The unequal regional representation and selection bias were identified as limitations.
Funding for this study: The authors report the results on behalf of the COMFORT consortium, an initiative of the Horizon Europe-funded COMFORT project (101079894).
Has your study been approved by an ethics committee? Yes
Ethics committee - additional information: This study was approved by the IRB, with approval code: EA4/213/22.
7 min
AI in routine teleradiology use: results of a large-scale test across Germany and Austria
Torsten Bert Thomas Moeller, Dillingen / Germany
Author Block: T. B. T. Moeller, P. F. W. Sögner; Dillingen/DE
Purpose: The objective of this study was to answer the question of whether the use of AI is already having a quality-improving effect in routine teleradiological reporting throughout Germany and Austria.
Methods or Background: We performed a study of 2,707 non-contrast cranial CT (CCT) scans from the CT departments of 140 hospitals in Germany and Austria between March and April 2022, which were analysed using an AI haemorrhage-detection tool. The results were compared with the findings of more than 70 teleradiologists who did not have access to the AI results at the time. Potentially discrepant findings were evaluated by two radiologists with specific neuroradiological CCT experience.
Additionally, the in-house error statistics from 2021 and 2022-23 were reviewed.
Results or Findings: Of the 2,707 CCT scans examined by both radiologists and AI, intracranial haemorrhage was described by both in 189 cases (approximately 7%). In 30 patients there was a discrepancy: the AI had detected a haemorrhage that had not been described by the radiologist. These cases were subsequently re-evaluated: twelve (40%) of the 30 unclear examinations were classified as AI false positives, eight as questionable positives, and 10 as true positives. Thus, there were 199 cases with intracranial haemorrhage in the studied patient group, of which >5% were primarily missed by radiologists without AI support.
A review of in-house error statistics also revealed a significant decrease in reported false findings for intracranial haemorrhage (from 16 in 2021 to one between 08/31/2022 and 08/31/2023).
Conclusion: The positive effects of AI on the quality of radiological reporting postulated in several studies can also be confirmed in practice and especially in the teleradiological context.
Limitations: This assumption should be substantiated by further studies.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: No information provided by the submitter.
7 min
Artificial intelligence should only read a mammogram when it is certain: a hybrid breast cancer screening reading strategy
Sarah Delaja Verboom, Nijmegen / Netherlands
Author Block: S. D. Verboom, J. Kroes, S. Pires, M. Broeders, I. Sechopoulos; Nijmegen/NL
Purpose: The aim of this study was to incorporate and evaluate uncertainty quantification metrics in an artificial intelligence (AI) breast cancer detection model and test their ability to guide a novel hybrid reading strategy in breast cancer screening in which recall decisions are only made by standalone AI when it exhibits high certainty.
Methods or Background: Uncertainty quantification metrics were obtained from a modified version of a commercial AI breast cancer detection model by structured Monte Carlo dropout. The metrics were defined as the variance or entropy of one or all suspicious regions and were used to estimate the certainty of the AI malignancy-present decision. With the proposed hybrid reading strategy, the recall decision is based on AI alone when its predictions are classified as certain, and on standard radiologist double reading otherwise. The new reading strategy was retrospectively tested on a previously unseen subset of all digital mammographic screening examinations acquired between 2003 and 2018 from a unit of the Dutch National Breast Cancer Screening Programme (n=41,469) with a minimum of two years' follow-up.
Results or Findings: The best-performing uncertainty metric was the entropy of the mean output for the most suspicious region per case. The hybrid reading strategy using this uncertainty metric and a recall rate equal to that of the standard radiologist double-reading strategy (27 per 1000) resulted in 46% of cases read by AI only and a cancer detection rate of 8.1 per 1000, which did not differ from the standard strategy (8.0 per 1000, p=0.217). The mean AUC of the AI model increased from 0.957 (95% CI 0.944-0.969) for all cases to 0.984 (95% CI 0.970-0.995) for the 46% of cases classified as certain (p<0.001).
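The routing rule described above can be sketched in a few lines. The thresholds, scores, and function names below are illustrative assumptions, not the authors' implementation: the AI handles a case alone only when the predictive entropy of its mean malignancy score, computed over repeated Monte Carlo dropout passes, falls below a certainty threshold.

```python
from math import log
from statistics import mean

def predictive_entropy(mc_scores):
    """Binary predictive entropy of the mean malignancy score over
    repeated Monte Carlo dropout passes for one suspicious region."""
    p = min(max(mean(mc_scores), 1e-12), 1 - 1e-12)  # numerical safety
    return -(p * log(p) + (1 - p) * log(1 - p))

def hybrid_decision(mc_scores, recall_threshold, entropy_threshold):
    """Recall decision by standalone AI only when its prediction is
    certain (low entropy); otherwise defer the case to radiologist
    double reading. Both thresholds are hypothetical tuning knobs."""
    if predictive_entropy(mc_scores) <= entropy_threshold:
        return ("AI", mean(mc_scores) >= recall_threshold)
    return ("double reading", None)
```

A confident model (scores clustered near 0 or 1 across dropout passes) yields low entropy and is routed to standalone AI; an ambivalent one (scores near 0.5) yields high entropy and is deferred.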
Conclusion: Leveraging AI uncertainty to guide a hybrid AI-radiologist screening reading strategy can potentially reduce workload by ~46% without decreasing performance.
Limitations: Identified limitations were that this was a retrospective study with single-site data.
Funding for this study: aiREAD was financed by NWO, KWF, HH.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: According to the Dutch Central Committee on Research involving Human Subjects, ethical approval was not necessary.
7 min
Setting up a compliant data registry for research on humans: a Swiss experience
Benoît Dufour, Sion / Switzerland
Author Block: B. Dufour1, B. Rizk1, C. Thouly1, H. Brat1, N. Heracleous1, D. Goyard2, P. Petetin3, F. Zanca4; 1Sion/CH, 2Paris/FR, 3Berre l'Etang/FR, 4Leuven/BE
Purpose: Since 2014, the Law on Human Research (LRH) in Switzerland has protected individuals participating in human research projects while ensuring quality and transparency.
We detail the establishment of a Compliant Data Registry (CDR) within a private radiology network in Switzerland.
Methods or Background: Data in the registry encompass DICOM images, examination reports, and clinical/demographic information.
Key elements in creating the registry included defining its purpose and objectives, establishing governance (legal structure, general informed consent, access rights), and outlining operational procedures (data storage duration, pseudonymisation, encryption key access).
For governance, we structured the organisational framework and designated responsible individuals.
A workflow for informed consent, including consent for AI-based image analysis, was implemented. Patients receive an SMS before appointments, granting access to information about the data registry and consent process. Patients can opt in or out for research by digitally signing the consent form on their smartphone or at the centre on the day of the exam. Signed consents are stored in our RIS, allowing radiologists to identify approved research and AI-analysed data.
For the operational processes, data are collected on a gateway, pseudonymised and sent to a cloud platform for storage, while ensuring segregation based on the data's source sites and projects.
Results or Findings: A total of 780,000 research consents were automatically stored in the RIS database between 18.01.2023 and 03.10.2023, with 678,235 (87%) consenting to research data reuse. Since implementing the registry, patient consent for AI-based data analysis has increased from 56% to 92%.
Conclusion: Our experience in setting up a CDR could serve as a promising model for other institutions seeking to improve healthcare outcomes by leveraging compliant data.
Limitations: The Swiss context might be different in other countries and other RIS systems might not guarantee the same level of integration.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: The study is about setting up a data registry.
7 min
Advancements in generative AI for radiological imaging
Can Ozan Tan, Enschede / Netherlands
Author Block: E. Hofmeijer, X. Zu, C. O. Tan; Enschede/NL
Purpose: Generative artificial intelligence (AI) has emerged as a transformative force in the field of radiology. It can empower radiologists with tools to enhance image quality, reconstruct degraded data, and synthesise realistic images, improving diagnostic accuracy and efficiency. In particular, generative AI enables the creation of synthetic datasets that facilitate the training of algorithms, as well as of residents and fellows, when real-world data are scarce or difficult to obtain due to privacy concerns.
Methods or Background: We have recently developed a pipeline for creating artificial 2D radiologic images. Publicly available standard and low-dose chest CT images (805 scans; 39,803 2D images, 17% containing lung nodules) were used to generate synthetic images. Five radiologists with experience in thoracic imaging were asked to assess synthetic image quality compared with that of the real images.
Results or Findings: Radiologists rated artificial images at 3.13 ± 0.46 on a scale from 1 (unrealistic) to 4 (indistinguishable from the original image), close to their rating of the original images (3.73 ± 0.31). An extended diffusion-based model was then used to identify features of lung nodules that distinguish malignant from benign ones and to generate further synthetic images reflecting these features. Malignant/benign classification based on synthetic images reached an accuracy of 85.5%.
Conclusion: Our results show that synthetic radiologic images are realistic and reliably adhere to the key radiographic features that reflect pathological changes. These results, once shown to be reliable across imaging modalities, organs, and pathologies, could enable synthetic images tailored to individual, personalised patient profiles ("digital twins").
Limitations: The ethical considerations surrounding the use of generative AI in radiology need to be addressed.
Funding for this study: This study was funded by a ZonMw Innovative Medical Devices Initiative (IMDI) subsidy for the B3CARE project (dossier number: 10-10400-98-008).
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: The data used for this work are based on publicly available sources: the Lung Image Database Consortium (LIDC) and the Image Database Resource Initiative (IDRI).
7 min
Improving CT justification practices with machine learning and deep learning: a multi-site study
Jaka Potočnik, Dublin / Ireland
Author Block: J. Potočnik, E. M. Thomas, A. Lawlor, D. Kearney, R. P. Killeen, E. J. Heffernan, S. J. Foley; Dublin/IE
Purpose: The aim of this study was to compare human experts with machine learning (ML) and deep learning (DL) models for assessing the justification of CT brain referrals. Multiclass classification of the anonymised referrals with ML and DL was used to determine whether prediction models could generalise and automate this process.
Methods or Background: Anonymised adult brain CT referrals performed in 2020 and 2021 were sourced from three Irish CT centres. A total of 3,000 referrals were randomly selected. Two radiologists and radiographers retrospectively categorised the referrals using iGuide as: justified, unjustified, or potentially justified. The final justification label for each referral was determined by majority vote or consensus.
Prior to feature extraction with bag-of-words (BoW), term frequency-inverse document frequency, and Word2vec models, word tokenisation, stop-word removal, and Enchant spell correction of the unstructured clinical indications were performed. The dataset was randomly split into stratified training and test sets (80/20). Downsampling to the minority class ensured class balance. Support vector machines, logistic regression, a gradient boosting classifier (GBC), a multi-layer perceptron, and a bidirectional long short-term memory neural network were evaluated. Their hyperparameters were tuned on the training set.
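As an illustration of the preprocessing and BoW feature-extraction steps described above, here is a minimal sketch; the toy stop-word list and the vocabulary-building corpus are hypothetical, spell correction is omitted, and this is not the authors' code:

```python
import re
from collections import Counter

# Toy stop-word list for illustration; real pipelines use a fuller list.
STOP_WORDS = {"the", "of", "and", "with", "a", "in", "for", "to", "on", "no"}

def tokenise(text):
    """Lowercase word tokenisation followed by stop-word removal
    (Enchant spell correction omitted for brevity)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def bag_of_words(text, vocabulary):
    """Fixed-length count vector over a shared vocabulary: the BoW
    representation fed to the downstream classifiers."""
    counts = Counter(tokenise(text))
    return [counts[term] for term in vocabulary]

# Vocabulary built from a (hypothetical) training corpus of indications.
vocab = sorted({t for referral in ("acute head injury", "chronic headache review")
                for t in tokenise(referral)})
```

Each referral text thus becomes a numeric vector of term counts, which any of the evaluated classifiers can consume.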
Results or Findings: A total of 11,090 referrals were collected and a random sample of 3,000 was reviewed. Of these, 238 (8.1%) were categorised by the raters as unjustified, 811 (27.4%) as potentially justified, and 1,909 (64.5%) as justified.
The best-performing classifier (BoW+GBC) achieved 94.4% accuracy and macro precision, recall, and F1 scores of 0.94.
Conclusion: ML- and DL-based approaches can generalise and accurately predict the justification of radiology referrals in accordance with the iGuide categorisation. This may help address poor European justification practices.
Limitations: Downsampling resulted in a smaller dataset for multiclass classification, which, in turn, led to suboptimal performance in DL. A larger, more representative dataset, along with a validation set, may provide better insights into performance.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: The University ethics committee has approved an ethics exemption (LS-E-21-216-Potocnik-Foley) based on the outcomes of local DPIAs.
7 min
Artificial Intelligence in automated protocolling for Finnish brain MRI referrals
Heidi Huhtanen, Turku / Finland
Author Block: H. Huhtanen, M. J. Nyman, A. Karlsson, J. Hirvonen; Turku/FI
Purpose: Advancements in AI-driven models for natural language processing have offered opportunities to automate many menial tasks that require understanding written text. Automating the protocolling of incoming MRI referrals could reduce interruptions in radiologists' workflow. The purpose of this study was to test different AI models in assigning a suitable protocol and the need for contrast medium for emergency brain MRI referrals.
Methods or Background: For training and testing the models, we collected 1,563 and 390 Finnish emergency brain MRI referral texts, respectively. Data were labelled according to the suitable imaging protocol and the need for contrast medium. We trained baseline machine learning (ML) models (three different algorithms) and newer deep learning (DL) models (BERT and GPT3) for classification. We also tested whether using less training data (50% of the training set), or using less data but upsampling it with augmentation, affected model performance.
Results or Findings: In protocol and contrast medium prediction, GPT3 outperformed other models with accuracies of 84% and 91%, respectively. BERT models had accuracies of 78% and 89%, and the best ML models 77% and 86%, respectively. For DL models, using less training data affected performance negatively. Upsampling the data with augmentation boosted BERT’s accuracy in the protocol task but not in the contrast medium task. For ML models, neither dataset size nor augmentation seemed to affect performance.
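The upsampling-with-augmentation experiment could be sketched as follows. The abstract does not specify the augmentation technique used, so the token-dropout function here is a hypothetical stand-in, shown only to make the idea of growing minority classes with text variants concrete:

```python
import random

def token_dropout(text, drop_prob=0.15, rng=None):
    """Create a variant of a referral text by randomly dropping words.
    (A deliberately simple stand-in for whatever augmentation the
    study actually used.)"""
    rng = rng or random.Random(0)
    kept = [t for t in text.split() if rng.random() >= drop_prob]
    return " ".join(kept) if kept else text

def upsample_minority(texts, labels, rng=None):
    """Grow every class to the size of the largest class by appending
    augmented copies of its own examples."""
    rng = rng or random.Random(0)
    by_class = {}
    for text, label in zip(texts, labels):
        by_class.setdefault(label, []).append(text)
    target = max(len(examples) for examples in by_class.values())
    out_texts, out_labels = list(texts), list(labels)
    for label, examples in by_class.items():
        for _ in range(target - len(examples)):
            out_texts.append(token_dropout(rng.choice(examples), rng=rng))
            out_labels.append(label)
    return out_texts, out_labels
```

After upsampling, every protocol class contributes equally many training texts, which is one common way to counter the class imbalance noted in the limitations.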
Conclusion: Our results show that there is potential in using AI for automatic protocolling. Although GPT3 outperformed the other algorithms, BERT and the ML models also performed well. However, the DL models seem to have more potential than the ML models to improve performance with increasing dataset size.
Limitations: The limitations of this study are the high imbalance between MRI protocol classes and using data from only one institute.
Funding for this study: Funding was provided by the Emil Aaltonen Foundation (grant number: 230049), and the Radiological Society of Finland.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: Review by the ethics committee was waived due to the retrospective nature of this study in accordance with national legislation.

This session will not be streamed, nor will it be available on-demand!