EAISI Research Track
How to trust AI?
In the Summit research track, TU/e researchers and partners will present their latest findings around this theme. Amid AI developments that lead to doubts or even fear, we place a much-needed focus on the positive influence AI has, and will have, on our lives and the real world around us. Please join the discussions and get to know in detail what is happening at the heart of Brainport’s AI research.
At EAISI, more than 900 AI researchers work on AI systems where the physical, digital, and human worlds come together. EAISI aims for a better understanding, better designs, better models, and better decisions in the application areas of Health, Mobility, and High-Tech Systems.
Preliminary Program Research Track
11:00 Coffee break and Expo
11:30 Dominique Fürst - Moderator
11:35 Alessandro Saccon
12:00 Mauro Salazar
12:25 Valentina Breschi
12:35 Meike Nauta & Nienke Bakx
13:00 Lunch break and Expo
14:15 Dominique Fürst - Moderator
14:20 Mathias Funk & Jon Pluyter
14:45 Isel Garcia Grau
15:10 Guang Hu
15:18 Giulia de Pasquale
15:25 Barend de Rooij

Alessandro Saccon
Abstract
Quadrupeds jumping on rocky terrain, humanoid robots performing somersaults, robot manipulators folding laundry: it is an exciting time in the field of robotics.
Still, we do not have a robot filling the dishwasher at home or a humanoid robot building a scaffold on a construction site. Where do we really stand? What have been the enablers of the undeniable progress in robotics research in the last five years? And what is still lacking to build trustworthy AI robotics systems for industry and our homes?
The talk will provide a perspective on pixel-to-action robot control, the role of physics simulation, the use of machine learning for providing unprecedented levels of fast but potentially faulty perception and motion planning, and the surge of new computational and mechatronics hardware that is meant to be tolerant to collisions and exploit physical contact with the environment, instead of fearing and avoiding it.
Alessandro Saccon received a PhD in control system theory from the University of Padova, Italy, in 2006. He also holds a degree with honors in computer engineering from the same university. Following his PhD, he held a research and development position at the University of Padova in collaboration with the racing motorcycle company Ducati Corse. From 2009 to early 2013, he held a post-doctoral research position at the Instituto Superior Técnico, Lisbon, Portugal, working on geometric numerical methods for solving trajectory optimization problems.
Alessandro has held visiting positions at the University of Colorado, Boulder (2003 and 2005), at the California Institute of Technology (2006), and at the Australian National University (2011). He is the author and co-author of more than 50 peer-reviewed scientific publications across his various research areas.

Mauro Salazar
After quite a few starts and stops, autonomous vehicles are finally becoming a reality. Recent advances in AI have significantly accelerated their development: classical perception-planning-control architectures are increasingly being replaced by end-to-end architectures empowered by foundational AI models, leading to remarkable performance levels. Yet a crucial question arises: how do we guarantee safety in such frameworks so that we can trust them?
In this talk, I will give an overview of the development of autonomous driving technologies and their evolution from classical modular stacks to end-to-end stacks. I will highlight the safety assurance challenges stemming from their monolithic structures and discuss how they are being addressed by embedding them within safety halos.
Finally, I will step back to reason on the broader deployment of autonomous vehicles within mobility systems. I will propose a path forward that is safe not only in the technical sense, but also in a societal one: a path that incorporates halos from the social sciences that help us avoid the engineer's trap and ensure deployments that transcend technology-for-technology paradigms and are instead rooted in the wellbeing of humans and the planet.
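To make the safety-halo idea concrete, here is a minimal, self-contained Python sketch. It is purely illustrative, with made-up numbers and a stand-in learned_policy function; it is not one of the architectures discussed in the talk.

```python
# Illustrative sketch of a "safety halo": a rule-based filter wraps a learned
# controller and overrides its command whenever a hard constraint is violated.
# All functions and numbers here are hypothetical.

def learned_policy(distance_to_lead_m: float, speed_mps: float) -> float:
    """Stand-in for an end-to-end driving model; returns an acceleration (m/s^2)."""
    return 1.0 if distance_to_lead_m > 20.0 else 0.0

def safety_halo(distance_to_lead_m: float, speed_mps: float, command: float) -> float:
    """Override the learned command if the time-to-collision falls below a threshold."""
    time_to_collision = distance_to_lead_m / max(speed_mps, 1e-3)
    if time_to_collision < 2.0:      # hard safety rule, checked outside the learned model
        return -4.0                  # emergency braking
    return command

raw = learned_policy(15.0, 12.0)
print("raw command:", raw, "-> after halo:", safety_halo(15.0, 12.0, raw))
```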
Mauro Salazar is an Assistant Professor in the Control Systems Technology section at Eindhoven University of Technology (TU/e) in the Netherlands, where he leads the MOVEMENT Research Group.
He is also affiliated with Eindhoven AI Systems Institute (EAISI). He earned his PhD in Mechanical Engineering from ETH Zürich in collaboration with the Ferrari Formula 1 team in 2019, before moving to Stanford University for a postdoctoral position on future mobility systems until 2020.
His research focuses on optimization models and methods for cyber-socio-technical systems and control, with applications in sustainable energy and mobility systems that foster justice and wellbeing. He was awarded the ETH Medal for his MSc and PhD theses, and his papers have earned several awards, including the Best Student Paper Award at the 2018 IEEE Intelligent Transportation Systems Conference and at the 2022 European Control Conference, as well as the Best Paper Award at the 2024 IEEE Vehicle Power and Propulsion Conference.
He was nominated for the TU/e Young Researcher Award in 2022 and 2025, and, in 2024, for membership in The Young Academy of the KNAW. In 2025, he received the Best Master Teacher Award from the Department of Mechanical Engineering.

Isel Garcia Grau
In our paper, we propose a model-agnostic post-hoc explanation procedure devoted to computing feature attribution. The proposed method, termed Sparseness-Optimized Feature Importance (SOFI), entails solving an optimization problem related to the sparseness of feature importance explanations. The intuition behind this property is that the model's performance is severely affected after marginalizing the most important features while remaining largely unaffected after marginalizing the least important ones. Existing post-hoc feature attribution methods do not optimize this property directly but rather implement proxies to obtain this behavior. Numerical simulations using both structured (tabular) and unstructured (image) classification datasets show the superiority of our proposal compared with state-of-the-art feature attribution explanation methods.
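The intuition behind the sparseness property can be illustrated with a small sketch (a hypothetical illustration, not the SOFI implementation itself, which solves an optimization problem): take any attribution vector for a trained classifier, marginalize the top-ranked and bottom-ranked features, and compare the resulting accuracy drops.

```python
# Illustrative check of the sparseness intuition: masking the most important
# features should hurt accuracy far more than masking the least important ones.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def accuracy_after_marginalizing(model, X, y, features):
    """Replace the given feature columns with their mean (a simple stand-in
    for marginalization) and re-evaluate the model."""
    X_masked = X.copy()
    X_masked[:, features] = X[:, features].mean(axis=0)
    return model.score(X_masked, y)

importance = model.feature_importances_       # any feature attribution vector works here
order = np.argsort(importance)[::-1]          # most to least important
k = 5
print("baseline accuracy :", model.score(X_te, y_te))
print("top-k marginalized:", accuracy_after_marginalizing(model, X_te, y_te, order[:k]))
print("bottom-k marginal.:", accuracy_after_marginalizing(model, X_te, y_te, order[-k:]))
# A sparse, well-ordered attribution shows a clear drop in the first case and
# almost none in the second.
```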
Isel Grau received her Ph.D. in Computer Science from Vrije Universiteit Brussel (VUB), Belgium, where her research focused on machine learning interpretability and semi-supervised classification. During her postdoctoral work at the Artificial Intelligence Laboratory of VUB, she collaborated on interdisciplinary projects with organizations such as the Interuniversity Institute of Bioinformatics Brussels, Universitair Ziekenhuis Brussel, and Collibra BV.
Isel has co-authored over 75 peer-reviewed publications and has been actively involved in organizing thematic workshops. She frequently serves as a reviewer for leading AI conferences and journals, including ECAI, BNAIC, LION, and IEEE Transactions on Fuzzy Systems. She has also been a visiting researcher at institutions such as Warsaw University of Technology and the University of Lisbon.

Dominique Fürst
Boundary Spanner Artificial Intelligence at TU/e innovation Space and EAISI
Dominique Fürst is a Boundary Spanner in Artificial Intelligence and moderator of the EAISI Research track. She is an alumna of the EAISI Academy professional education program in Data Science & AI, where she and her team received the Best Project Award for their work with Enexis, applying reinforcement learning to enable smart charging strategies for electric vehicles.
In her role as Boundary Spanner in AI, Dominique connects students, researchers, and external partners around AI-driven challenges or real-world problems that can be addressed using AI. In close collaboration with diverse stakeholders, she translates these into meaningful challenges within Challenge-Based Learning (CBL) education that align with scientific research. She facilitates multidisciplinary collaboration and ensures that societal and academic goals are brought together to create tangible impact by empowering the next generation of AI-enabled engineers and advancing responsible innovation.
In addition to her AI-focused work, Dominique coordinates the Boundary Spanner Program at TU/e innovation Space, supporting a growing network of TU/e staff who span boundaries across disciplines, institutes, and sectors. TU/e innovation Space is the center of expertise for CBL and student entrepreneurship at Eindhoven University of Technology. It serves as a learning hub for educational innovation and an open community where students, researchers, industry, and societal organizations exchange knowledge and co-create responsible solutions to real-world challenges.
Dominique will moderate the EAISI Research track for the second time.

Guang Hu
Our paper examines the critical role of trust in artificial intelligence systems that manage multi-source energy integration in sustainable urban environments. Building on our ongoing machine learning analyses of photovoltaic (PV) and building-integrated PV systems, we investigate how AI can optimize interconnected energy networks. Our design-driven research adopts an interdisciplinary approach to model the complex relationships between diverse generation sources (solar, wind, nuclear, thermal), energy storage systems, and consumption sectors (residential, commercial, industrial) at appropriate urban scales.
We demonstrate how these energy sources connect through existing and new infrastructure networks for both electricity and heat distribution. The complexity of these systems necessitates advanced AI-driven analytics for optimal operation, defined through multiple objectives including flexibility, sustainability, and cost-effectiveness.
This research addresses the central question: "How can we design trustworthy AI systems that balance automation with human oversight in complex urban energy networks?" We propose a comprehensive framework for establishing trust in AI-powered energy integration hubs, focusing on transparency in decision-making, validation through real-world testing, and human-AI collaborative approaches.
Through case studies utilizing multi-source energy data, we demonstrate how properly designed trust mechanisms enhance energy efficiency, grid resilience, and stakeholder acceptance while addressing concerns regarding data privacy, algorithmic bias, and system reliability. We invite collaboration from stakeholders willing to share operational energy system data to further validate and refine our models. The findings suggest that trustworthy AI implementation is essential for maximizing the potential of integrated energy systems in sustainable urban development, particularly as cities face increasing challenges from climate change and growing energy demands.
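As a toy illustration of the kind of optimization such a hub performs (a sketch with assumed costs and capacities, not the models or data from the paper), a single-interval dispatch problem can be written as a small linear program:

```python
# Toy single-interval dispatch: meet demand at minimum cost while respecting
# the availability of each source. All numbers are hypothetical.
from scipy.optimize import linprog

cost = [0.05, 0.07, 0.25]          # EUR/kWh for solar, wind, grid (assumed)
available = [80.0, 60.0, 1e6]      # kWh available from each source this hour
demand = 120.0                     # kWh that must be supplied

res = linprog(
    c=cost,
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],       # total supply equals demand
    bounds=list(zip([0.0, 0.0, 0.0], available)),
)
print("dispatch (solar, wind, grid):", res.x.round(1))
print("total cost:", round(res.fun, 2))
```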
Guang Hu obtained his Ph.D. with distinction from Tsinghua University in 2019. Prior to joining TU/e, he held positions as a postdoctoral researcher at the Karlsruhe Institute of Technology in Germany and as a researcher at the Paul Scherrer Institut (PSI) in Switzerland, building a strong international research profile.
Guang has published in several respected journals in the fields of computational modeling, machine learning applications, and sustainable/nuclear energy. His research contributions span theoretical advancements in modeling techniques and practical applications for energy systems. He actively participates in international research collaborations and serves as a reviewer for journals in his field, contributing to the academic community's knowledge development and quality assurance.

Valentina Breschi
While models grounded on first principles and domain knowledge are highly explainable and, thus, easily trusted by the final user, they struggle with accuracy as the complexity of phenomena to be described increases. Conversely, despite their descriptive power, black-box approaches face challenges in interpretability and handling physical constraints. Understanding how to take advantage of the strengths of these two modeling paradigms and mitigate their shortcomings is crucial to getting accurate models that are also intelligible to the final user.
In this pitch, I will introduce a framework allowing models with different opacities to compete and/or collaborate to accurately and transparently characterize a phenomenon of interest. I will then show how simplifying the employed models allows for performance and uncertainty quantification, providing users with quantifiable insights on how much a model can be trusted.
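A minimal sketch of the general idea of letting models of different opacities collaborate (an illustration under assumed toy dynamics, not the framework presented in the pitch): a first-principles term explains the bulk of the behavior, a black-box learner corrects the residual, and the residual spread gives a rough, quantifiable trust band.

```python
# Gray-box toy example: explainable physics term plus a black-box residual model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 400).reshape(-1, 1)
y = 2.0 * t.ravel() + np.sin(3 * t.ravel()) + rng.normal(0, 0.1, t.shape[0])

physics_pred = 2.0 * t.ravel()                         # first-principles part (assumed known law)
residual_model = GradientBoostingRegressor().fit(t, y - physics_pred)
combined_pred = physics_pred + residual_model.predict(t)

residual_std = np.std(y - combined_pred)               # crude uncertainty / trust band
print(f"physics-only RMSE: {np.sqrt(np.mean((y - physics_pred) ** 2)):.3f}")
print(f"combined RMSE    : {np.sqrt(np.mean((y - combined_pred) ** 2)):.3f}")
print(f"trust band (±1σ) : {residual_std:.3f}")
```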
Valentina Breschi received her B.Sc. in Electronic and Telecommunication Engineering and her M.Sc. in Electrical and Automation Engineering from the University of Florence (Italy) in 2011 and 2014, respectively. She received her Ph.D. in Control Systems from the IMT School for Advanced Studies Lucca (Italy) in 2018, during which she was a visiting scholar at the University of Michigan (USA). From 2018 to 2023, she was with Politecnico di Milano (Italy), first as a post-doctoral researcher and then, from 2020 to 2023, as a junior assistant professor. In 2023, she joined the Control Systems Group at TU/e as an Assistant Professor.

Giulia De Pasquale
Prediction-based decision-making systems are becoming increasingly prevalent in various domains. Previous studies have demonstrated that such systems are vulnerable to runaway feedback loops, which exacerbate existing biases: the automated decisions have dynamic feedback effects on the system itself.
In this talk we will show how the existence of feedback loops in the machine learning-based decision-making pipeline can perpetuate and reinforce machine learning biases, and propose strategies to counteract their undesired effects. Understanding and mitigating these feedback mechanisms is a key step toward developing AI systems we can trust to make fair and reliable decisions.
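A toy simulation can show how such a loop runs away (an illustrative sketch with made-up numbers, not the models analyzed in the talk): two groups have identical true incident rates, but the system only observes incidents where it allocates attention, and it allocates attention greedily based on its own past observations.

```python
# Toy runaway feedback loop: identical true rates, yet the greedy allocation
# locks onto whichever group happens to pull ahead early on.
import numpy as np

rng = np.random.default_rng(0)
true_rate = np.array([0.3, 0.3])                        # identical underlying behaviour
observed = rng.binomial(50, true_rate).astype(float)    # one initial round of equal attention

for _ in range(50):
    lead = int(np.argmax(observed))                      # group predicted to be "riskier"
    attention = np.where(np.arange(2) == lead, 0.9, 0.1)
    incidents = rng.binomial((100 * attention).astype(int), true_rate)
    observed += incidents                                # feedback: we only see where we look

print("final attention shares:", np.round(observed / observed.sum(), 2))
# Despite equal true rates, one group ends up receiving most of the attention.
```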
Giulia De Pasquale has been an Assistant Professor in the Control Systems group at TU Eindhoven since 2025. Before that, she was a postdoc and lecturer at ETH Zurich.
She received her PhD in Control and Systems Engineering from the University of Padova, Italy, in 2023. During her PhD, she spent a period abroad as a Visiting Research Scholar at the University of California, Santa Barbara.
She received her Master's degree in Control Engineering and her Bachelor's degree in Information Engineering from the University of Padova in 2019 and 2017, respectively.
During her Master's, she took part in the Erasmus program, with a research stay at Luleå Tekniska Universitet, Sweden, in collaboration with RI.SE.SICS Luleå and ABB Västerås, working on system identification for airflow in data centers. She also spent six months at ETH Zurich, Switzerland, working on her Master's thesis.

Barend de Rooij
What would it take for us to invest our trust in AI wisely? This question is frequently taken to be about the trustworthiness of AI systems. Depending on one’s definition of trustworthiness, an AI system is worthy of our trust if it is reliable and/or meets certain ethical constraints, such as the absence of pernicious biases.
While we should design systems that are trustworthy in this way, in this talk I will shift our focus away from the qualities of AI systems to the qualities of the people developing, deploying, and interacting with these systems.
Trusting wisely, I will argue, requires that we develop the skills, capacities, and character traits – in short, virtues – that enable epistemic attunement to the capabilities and limitations of AI systems. Construed as a techno-epistemic virtue, trust is a cultivated, context-sensitive capacity to discern when, how, and to what extent we can rely on AI outputs.
Barend is an assistant professor of ethics at Tilburg University, where he does research on AI ethics and human-centered technology. He is especially interested in the role technology plays in (re-)producing racial, gender, and other injustices.
Barend de Rooij joined the Department of Philosophy in January of 2022 after completing a joint PhD in Philosophy at the University of Sheffield (UoS) and the University of Groningen (UG). In his doctoral research, which was supervised by Boudewijn de Bruin (UG) and Miranda Fricker (NYU), he studied the notion of group character, or the idea that collectives can instantiate virtuous or vicious character states qua group.
Before moving to Sheffield, he completed an MA in Philosophy and European Studies at KU Leuven. He also holds an MLitt in Philosophy from the University of St. Andrews. During his BA at Leiden University College, he spent some time studying philosophy at Rutgers University, where he discovered his passion for analytic philosophy.

Mathias Funk DUO Talk
Associate Professor at the Department of Industrial Design
AI has trust issues, a therapy session
Unexpected things happen when innovative AI-based solutions meet people on the ground, working in industry, academia and healthcare: busy professionals need reliable tools, but are given “probabilities”. In this talk we dive into the hospital context and what role AI plays in clinical radiology. This talk is not medical advice.
AI innovation has reached the medical domain, from medical image analysis to the more recent use of genAI in clinical workflows, remote patient management and patient administration. Unexpected things happen when innovative AI-based solutions meet healthcare professionals: they need reliable tools but are given “probabilities” of cancer malignancy and complex scores instead of actionable and accountable insights. How should they make sense of tools that wrap around novel AI models that are essentially black boxes that keep on evolving? How should we anticipate and support physicians who accept everything, or nothing, coming from such a model?
In this talk we dive into the complexity of the hospital context with a focus on clinical radiology and explore what role AI can play, what “AI contamination” might be, and how to design strategically and practically in this new reality. Trigger warning: this talk might contain explicit imagery, genAI and spicy opinions.
Dr. Mathias Funk is Associate Professor leading the Computational Design Systems research cluster in the Department of Industrial Design at Eindhoven University of Technology (TU/e). He has a background in Computer Science and a PhD in Electrical Engineering.
His research interests include methods and tools for designing with data, data-enabled design, human-AI collaboration, and designing systems of smart things. In the past, he has conducted research at ATR (Japan), Philips Consumer Lifestyle and Philips Experience Design, Canon Production Printing, Intel Labs (Santa Clara), National Taiwan University of Science and Technology, and National Taiwan University.
He has co-authored over 130 scientific publications and the book “Coding Art” with Apress/Springer. He is the co-founder of UXsuite, a high-tech spin-off from Eindhoven University of Technology.
DUO Talk Jon Pluyter, Senior UX and Clinical Workflow Designer @ Philips, Research Associate at Industrial Design @ TU/e
He focuses on the research and development of smart, data- and AI-driven medical imaging applications to improve complex clinical decision making.
For example, to support teams of radiographers, radiologists and clinicians in the diagnosis and treatment of cancer. He also builds bridges between industry, academic and clinical partners to innovate together, not for doctors but with doctors.


Meike Nauta DUO Talk
Assistant Professor at the Department of Mathematics and Computer Science at TU/e and senior AI consultant.
Beyond Explainability: Building Trust through Interactive, Interpretable AI
What makes an AI system truly effective? We dive into the technical foundations of explainable and interactive AI, and show how these concepts play out in real-world applications. We explore how cutting-edge research and the latest AI models meet industry reality.
As AI systems become more integrated into decision-making, research into explainable and interactive AI models offers important insights into how we can design systems that are both effective and trustworthy. In this talk, we explore recent academic perspectives on how humans and AI can collaborate more meaningfully through developing interpretable AI models.
We then connect these ideas to real-life use cases from industry, showing how these concepts play out in practice. Through concrete examples, we reflect on the challenges and opportunities of applying academic insights in real-world settings—and what this means for the future of human-AI collaboration.
tba
DUO Talk Nienke Bakx, AI Consultant
tba


Bert de Vries DUO Talk
Full Professor at the Department of Electrical Engineering
How to Trust AI: Embracing Uncertainty with Probabilistic Methods
In today's high-stakes business environment, deploying AI systems that stakeholders can genuinely trust isn't optional—it's essential. While organizations rush to adopt AI capabilities, few address the fundamental issue undermining trust: uncertainty. Probabilistic approaches offer a mathematically rigorous solution to this challenge that deterministic methods cannot match.
Trust in AI hinges on transparency and uncertainty quantification. While AI systems are proliferating across industries, few address a fundamental issue: indicating confidence levels rather than producing single-point predictions. Probabilistic approaches explicitly represent uncertainty alongside outputs, allowing businesses to distinguish between high-confidence predictions and educated guesses, while maintaining clear reasoning chains for proper auditing.
Probabilistic AI enhances trustworthiness by representing knowledge as distributions rather than fixed values. This approach accommodates the inherent uncertainty in business data and updates beliefs as new evidence emerges. Systems using probabilistic methods can appropriately weigh information sources, recognize data insufficiency, and communicate limitations to stakeholders. The result is AI that remains reliable even when facing incomplete information in real-world environments, not just in controlled settings. For enterprises making critical decisions, this honest representation of AI capabilities establishes a foundation for genuine trust and responsible deployment, ultimately reducing risk and enhancing strategic outcomes.
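As a minimal sketch of what this looks like in practice (an illustration with assumed numbers, not the speakers' toolchain), a conjugate Beta-Bernoulli update shows how two datasets with the same observed success rate carry very different amounts of uncertainty:

```python
# Beta-Bernoulli sketch: same observed 75% success rate, very different uncertainty.
from scipy import stats

def posterior_summary(successes, trials, a=1.0, b=1.0):
    """Conjugate update of a Beta(a, b) prior with Bernoulli observations."""
    post = stats.beta(a + successes, b + trials - successes)
    lo, hi = post.interval(0.95)
    return post.mean(), lo, hi

for s, n in [(3, 4), (300, 400)]:
    mean, lo, hi = posterior_summary(s, n)
    print(f"{s}/{n} successes: mean={mean:.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```

The wide interval on the small dataset is the "educated guess" mentioned above; the narrow one is a prediction a stakeholder can actually act on.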
DUO Talk Albert Podusenko, University Researcher at the Department of Electrical Engineering and the BIAS lab at TU/e, and CEO of Lazy Dynamics
Albert Podusenko is a postdoctoral fellow at TU Eindhoven and the founder and CEO of Lazy Dynamics.



