Pixel Pandemonium at ECR 2025
The Pixel Pandemonium at ECR 2025 is the perfect platform to explore up-and-coming AI and ML tools for medical imaging. This program offers a unique opportunity to see innovative technology firsthand and get hands-on experience!
Discover the latest AI/ML technology in the medical imaging sphere. We want to see what is coming and what might have an impact in the next couple of years!
The Pixel Pandemonium Exhibition is endorsed by the Medical Image Computing and Computer Assisted Intervention Society (MICCAI).
Demos in the exhibition include more than 15 software tools showcasing cutting-edge AI technology.
Come and find us at EXPO X1, Level -2, in the AIX area.
- Ben Glocker, Imperial College London: Showcasing generative AI for high-fidelity image synthesis, including chest X-rays, brain MRI, and mammograms. The demo simulates ‘what-if’ scenarios, such as aging or smoking effects on medical images.
- Jameson Merkow, Microsoft: Demonstrating a multi-agent AI system that collaborates to generate treatment plans by synthesizing radiology, pathology, and clinical data, enhancing multidisciplinary workflows and patient outcomes.
- Lauren Cooke, Harvard: Presenting a model that alters chest X-rays with prompts describing radiological findings, allowing visualization of changes for educational purposes and robust synthetic dataset creation.
- Miriam Groeneveld, Radboud UMC: A platform for radiologists to develop and validate AI solutions in biomedical imaging, supporting secure model development, dataset storage, and performance comparison with clinicians.
- Johanna Brosig, Fraunhofer MEVIS: Facilitating automatic mitral valve analysis and minimally invasive surgery planning, using augmented reality to visualize heart models for intervention planning and patient education.
- Philip Pratt, Medical Sight: Performing 2D/3D spatial registration of live fluoroscopy to preprocedural CT angiograms, with real-time instrument tracking and augmented reality visualization for surgical guidance.
- Claudia Lindner, University of Manchester: Automating key radiographic measurements for hip surveillance in children with cerebral palsy, reducing clinician workload and ensuring consistent assessments.
- Shirin Heidarikahkesh, Uniklinik Erlangen: VirtuHance generates contrast-enhanced breast MRI images without gadolinium-based agents, applying convolutional neural networks to non-contrast MRI images and presenting the results for evaluation.
- Aurelien Bustin, Lyric: Showcasing a research tool for automated cardiac scar segmentation and analysis using a joint bright- and black-blood cardiac MRI sequence, providing comprehensive quantification and clinical reports.
- Jorik Slotman, Isala: An AI-based algorithm for automatic detection and segmentation of brain metastases in radiosurgery patients, integrated into clinical workflows and showing substantial overlap with expert assessments.
- Daniel Capellán-Martín, Polytechnic University of Madrid: A pediatric brain tumor segmentation AI algorithm with high accuracy, reducing manual annotation workload and aiding in surgical planning and treatment optimization.
- Paul Herent, Raidium: ONCOPILOT, an interactive CT foundation model for automatic 3D segmentation of solid tumors, enhancing tumor volume measurement accuracy and speed while reducing inter-user variability.
- Leonard Nürnberg, Harvard: MHub adapts public AI models into a DICOM-compatible format for easy execution, harmonizing results across models and facilitating lung segmentation directly on public data.
- Hannah Strohm, Fraunhofer MEVIS: ProMedCEUS offers automatic classification of washout features from CEUS acquisitions, with an AI classifier predicting washout categories and providing uncertainty estimates.
- Niklas Agethen, Fraunhofer MEVIS: A deep learning-based method for automated reconstruction of white matter tracts from raw diffusion MRI data, showing high reconstruction quality near brain tumors.
- Hans Meine, Fraunhofer MEVIS: An uncertainty-driven loop for AI model training that guides experts to the regions needing attention and requires only local corrections, improving annotation efficiency.
- Stefan Denner, DKFZ Heidelberg: Kaapana, an open-source platform for integrating AI-driven medical image analysis tools into research workflows, supporting large-scale, federated studies and innovation.
- Felicia Alfano, Polytechnic University of Madrid: A deep learning-based method for preoperative localization of breast tumors, predicting deformations during surgery and visualizing tumor localization in the intraoperative context.