WELCOME
We invite you to explore this year’s theme: “How to trust AI? The Good, the Bad and the AI.”
AI Summit Brainport 2025 will take place on Thursday, November 13, at Evoluon in Eindhoven.
Join leading experts and innovators as we delve into the promise and peril of AI: from explainable AI and verifiable models that lift the veil on the “black box”, to the ethical implications, bias, and fairness that shape AI’s impact on society.
We will address existential risks posed by advanced AI, cutting-edge strategies for AI security, and practical ways to ensure safe deployment of AI models. Who bears responsibility when AI goes wrong, and how can we make accountability actionable?
Through interactive tracks and dynamic sessions, discover how to harness the good, confront the bad, and shape the future of trustworthy AI.
This year's summit introduces two exciting new tracks: Explaining AI and AI Implementation, replacing the Expert and Adoption tracks. Additionally, the EAISI Research track will return, along with the AI Pitch Competition Final.
Unlike previous years, the summit will start at 9:45, but the doors will open at 8:45 so you can have a coffee and do some networking.
MORNING PROGRAM
08:45 | Walk-in with tea/coffee, Expo
Panel Discussion: How to trust AI?
AFTERNOON PROGRAM
13:00 | Lunch break & Expo


Alix Rübsaam
Vice President Research, Expertise & Knowledge, Singularity University
AI & The Opportunities of Responsible Technologies
When AI is used for decision-making, unintended biases can become embedded, impacting fairness, accuracy, and performance. This talk explores how algorithms are designed, where blind spots arise, and how to build more socially responsible systems. Participants will learn to identify risks and opportunities and to lead responsibly in the data-driven landscape.
Abstract
Despite our intentions, unintended outcomes can become hard-coded into data-driven technologies when decision-making is automated with AI. Any AI system risks amplifying and perpetuating blind spots in its models. These blind spots are not just a risk to fairness: they can degrade the performance of your models or expose your systems to regulatory consequences. The better you understand the inner structure of your AI systems and algorithms, the better equipped you are to leverage these insights to your advantage.
This talk unpacks the design and decision-making that go into algorithmic systems, how to identify and analyze the mechanisms in AI, and how to ensure those mechanisms are aligned with your goals. We will dive into the limitations and possibilities of data-driven technologies, learn to identify opportunities for responsible AI, and assess automated decision-making systems on their risk of unintended consequences.
Plenary Panel Discussion
How to trust AI?
Lin-Lin Chen, Dean of the Department of Industrial Design at TU/e

Lin-Lin Chen is dean of the Department of Industrial Design and chair of Design Innovation Strategy at the Eindhoven University of Technology in the Netherlands. She is Editor-in-Chief of the International Journal of Design. She was a professor in the Department of Design at the National Taiwan University of Science and Technology (NTUST) from 1998 to 2022, dean of the College of Design at NTUST from 2004 to 2010, president of the Chinese Institute of Design from 2007 to 2008, and convener for the arts (and design) area committee of Taiwan’s National Science Council from 2009 to 2011. She founded the International Journal of Design (SCI, SSCI, AHCI) in 2007, was president of the International Association of Societies of Design Research (IASDR) from 2017 to 2019, and has been a fellow of the Design Research Society since 2006. Her research focuses on human-AI interaction, aesthetics, interdisciplinary collaboration, and design innovation strategy.
Joep Meindertsma, Founder, PauseAI

Leon Kester, Senior Research Scientist, TNO

Dr. Leon Kester is a Senior Research Scientist at TNO in the Netherlands, with a PhD in physics. He is a transdisciplinary scientist in Responsible AI working on the safety, security, meta-ethics, and governance of high-risk AI systems and XR technologies. His research integrates perspectives from, among others, human-centered AI, systems engineering, cybernetics, cognitive science, moral psychology, law, moral programming, and philosophy of science. In recent years, he has published many papers and book chapters (most of them with Dr. Nadisha-Marie Aliman, an independent postdoctoral scientist) on these topics. In the EU standardization committee CEN/CENELEC JTC21, he is an editor of the European Norm for Risk Management and introduced “Augmented AI system goal specification” for advanced high-risk AI applications.
Alix Rübsaam, VP Research, Expertise & Knowledge, Singularity University


Carlo van de Weijer
Carlo will lead the plenary session of the program.
View profile on TU/e website

Organizing partners